Similar Documents
20 similar documents found (search time: 46 ms)
1.
We define and prove a formal semantics divided into two complementary interacting components: the strictly linguistic (i.e. linguistically marked) semantics, which we call the linguistic agent (LA), and the strictly logical and referential semantics, which we call the rational agent (RA). This Linguistic \(\leftrightarrow \) Rational Agents' Semantics (LRA semantics) applies to Deep Dependency trees (DD-trees) or, more generally, to discourses, i.e. sequences of DD-trees, and interprets them by functional structures we call Meaning Representation Structures (MRS), similar to DRT but interpreted very differently. LRA semantics incrementally interprets discourses by minimal finite models, called proto-models, in a monotonic logic of the LA, and checks the proto-models against the classical models of the RA. The proto-model is considered the linguistic sense of the discourse. We define the LA in full detail, since we believe it must be universal. On the other hand, we do not propose a particular RA; we only define the scheme of interaction between the two agents and the stimuli of the RA used by the LA. As a result, in LRA semantics every discourse has a single meaning, and a single sense for every rational agent used to interact with the linguistic agent.

2.
Operating systems code is often developed according to principles like simplicity, low overhead, and low memory footprint, and schedulers are no exception. A scheduler is usually developed with flexibility in mind, which restricts its ability to provide real-time guarantees. Moreover, even when schedulers can provide real-time guarantees, these guarantees are rarely quantified using theoretical analysis that carries over to the implementation. To analyze the guarantees offered by operating systems' schedulers, we developed a publicly available tool that analyzes timing properties extracted from the execution of a set of threads and computes lower and upper bounds on the supply function offered by the execution platform, together with information about migrations and statistics on execution times. rt-muse evaluates the impact of many application and platform characteristics, including the scheduling algorithm, the amount of available resources, the usage of shared resources, and the memory access overhead. Using rt-muse, we show the impact of Linux scheduling classes, shared data, and application parallelism on the delivered computing capacity. The tool provides useful insights into the runtime behavior of the applications and the scheduler. In the reported experiments, rt-muse detected some issues with the real-time Linux scheduler: despite having available cores, Linux does not migrate SCHED_RR threads which are enqueued behind SCHED_FIFO threads with the same priority.
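For context, a common way to state such supply bounds is the bounded-delay model, in which the platform is summarized by a long-term rate \(\alpha\) and a maximum service delay \(\Delta\) (a standard abstraction in real-time analysis; whether rt-muse reports exactly these two parameters is an assumption here, not a claim from the abstract):

\[ \mathrm{sbf}(t) \;\ge\; \max\bigl(0,\; \alpha\,(t - \Delta)\bigr), \]

i.e., in any time window of length \(t\), the platform delivers at least \(\alpha(t-\Delta)\) units of computation once the initial delay \(\Delta\) has elapsed.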

3.
A map is a data structure commonly used to store data as key–value pairs and retrieve data as keys, values, or key–value pairs. Although Java offers several map implementation classes, the Android SDK offers further implementations that are supposed to be more efficient than HashMap: ArrayMap and the SparseArray variants (SparseArray, LongSparseArray, SparseIntArray, SparseLongArray, and SparseBooleanArray). Yet the official Android documentation says little about the performance of these implementations in terms of CPU time, memory usage, and energy consumption, although saving CPU, memory, and energy is a major concern of users wanting to increase battery life. Consequently, we study the use of map implementations by Android developers in two ways. First, we perform an observational study of 5,713 Android apps on GitHub. Second, we conduct a survey to assess developers' perspective on Java and Android map implementations. Then, we perform an experimental study comparing the HashMap, ArrayMap, and SparseArray variant map implementations in terms of CPU time, memory usage, and energy consumption. We conclude with guidelines for choosing among the map implementations: HashMap is preferable over ArrayMap to improve the energy efficiency of apps, and SparseArray variants should be used instead of HashMap and ArrayMap when keys are primitive types.
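To make the last guideline concrete, here is a minimal sketch contrasting the two choices for primitive int keys (it assumes an Android build environment, since android.util.SparseArray is not part of the JDK):

    import android.util.SparseArray;
    import java.util.HashMap;

    public class MapChoiceExample {
        public static void main(String[] args) {
            // HashMap boxes every int key into an Integer object,
            // costing extra allocations and garbage-collector pressure.
            HashMap<Integer, String> boxed = new HashMap<>();
            boxed.put(42, "label");           // 42 is autoboxed to Integer

            // SparseArray keeps primitive int keys in a sorted array,
            // trading O(log n) lookups for a smaller memory footprint.
            SparseArray<String> sparse = new SparseArray<>();
            sparse.put(42, "label");          // no boxing
            String value = sparse.get(42);    // binary search on the key array
            System.out.println(value);
        }
    }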

4.
5.
Duration Calculus was introduced in [ZHR91] as a logic to specify and reason about requirements for real-time systems. It is an extension of Interval Temporal Logic [Mos85] in which one can reason about integrated constraints over time-dependent and Boolean-valued states without explicit mention of absolute time. Several major case studies, e.g. the gas burner system in [RRH93], have shown that Duration Calculus provides a high level of abstraction for both expressing and reasoning about specifications. Using timed automata [AD92] one can express how real-time systems can be constructed at a level of detail close to an actual implementation. In this paper we consider the correctness of timed automata with respect to Duration Calculus formulae. For a subset of Duration Calculus, we show that one can automatically verify whether a timed automaton \(\mathcal{A}\) is correct with respect to a formula \(D\), abbreviated \(\mathcal{A} \models D\), i.e. one can do model-checking. The subset we consider is expressive enough to formalize the requirements of the gas burner system given in [RRH93], but only for a discrete time domain. Model-checking is done by reducing the correctness problem \(\mathcal{A} \models D\) to the inclusion problem for regular languages.
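For intuition, the central gas burner requirement from [RRH93] is commonly rendered in Duration Calculus as follows (reconstructed from the literature, as one typical formulation): in every observation interval of length \(\ell\) of at least 60 seconds, gas may leak during at most one twentieth of the time,

\[ \ell \ge 60 \;\Rightarrow\; 20 \int \mathit{leak} \;\le\; \ell, \]

where \(\int \mathit{leak}\) denotes the accumulated duration for which the Boolean state \(\mathit{leak}\) holds over the interval, and \(\ell\) is the interval length.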

6.
Wearable apps have become increasingly popular in recent years. Nevertheless, to date, very few studies have examined the issues that wearable apps face. Prior studies showed that user reviews contain a plethora of insights that can be used to understand quality issues and help developers build better-quality mobile apps. Therefore, in this paper, we mine user reviews in order to understand user complaints about wearable apps. We manually sample and categorize 2,667 reviews from 19 Android wearable apps. Additionally, we examine the replies posted by developers in response to user complaints. This allows us to determine the types of complaints that developers care about the most, and to identify problems that, despite being important to users, do not receive a proper response from developers. Our findings indicate that the most frequent complaints are related to Functional Errors, Cost, and Lack of Functionality, whereas the most negatively impacting complaints are related to Installation Problems, Device Compatibility, and Privacy & Ethical Issues. We also find that developers mostly reply to complaints related to Privacy & Ethical Issues, Performance Issues, and notification-related issues. Furthermore, we observe that when developers reply, they tend to provide a solution, request more details, or let the user know that they are working on a solution. Lastly, we compare our findings on wearable apps with the study by Khalid et al. (2015) on handheld devices. We find that some complaint types that appear in handheld apps also appear in wearable apps, though wearable apps have unique issues related to Lack of Functionality, Installation Problems, Connection & Sync, Spam Notifications, and Missing Notifications. Our results highlight the issues that users of wearable apps face the most, and the issues to which developers should pay additional attention due to their negative impact.

7.
The Meteor metric for automatic evaluation of machine translation
The Meteor Automatic Metric for Machine Translation evaluation, originally developed and released in 2004, was designed with the explicit goal of producing sentence-level scores that correlate well with human judgments of translation quality. Several key design decisions were incorporated into Meteor in support of this goal. In contrast with IBM's Bleu, which uses only precision-based features, Meteor uses and emphasizes recall in addition to precision, a property that several metric evaluations have confirmed to be critical for high correlation with human judgments. Meteor also addresses the problem of reference translation variability by utilizing flexible word matching, allowing morphological variants and synonyms to be taken into account as legitimate correspondences. Furthermore, the feature ingredients within Meteor are parameterized, allowing the metric's free parameters to be tuned in search of values that result in optimal correlation with human judgments. Optimal parameters can be tuned separately for different types of human judgments and for different languages. We discuss the initial design of the Meteor metric, subsequent improvements, and its performance in several independent evaluations in recent years.
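As a rough illustration of this recall-weighted design, the sentence-level computation can be sketched using the constants of the original formulation (the released metric exposes these as tunable parameters; treat the constants below as one setting, not the only one):

    public class MeteorSketch {
        /**
         * Sketch of the original Meteor sentence score.
         * m      : number of matched unigrams
         * sysLen : unigrams in the system translation
         * refLen : unigrams in the reference translation
         * chunks : number of contiguous matched chunks
         */
        static double score(int m, int sysLen, int refLen, int chunks) {
            if (m == 0) return 0.0;
            double precision = (double) m / sysLen;
            double recall    = (double) m / refLen;
            // Recall is weighted nine times more heavily than precision.
            double fMean = 10.0 * precision * recall / (recall + 9.0 * precision);
            // Fragmentation penalty: fewer, longer chunks are rewarded.
            double penalty = 0.5 * Math.pow((double) chunks / m, 3.0);
            return fMean * (1.0 - penalty);
        }

        public static void main(String[] args) {
            // Example: 7 matches over 8 system / 9 reference unigrams, in 3 chunks.
            System.out.println(score(7, 8, 9, 3));   // about 0.756
        }
    }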

8.
An algorithm, called FTM, for scheduling real-time sporadic tasks on a multicore platform is proposed. Each task has a deadline by which it must complete its non-erroneous execution. The FTM algorithm executes backups in order to recover from errors caused by non-permanent and permanent hardware faults. The worst-case schedulability analysis of the FTM algorithm is presented for an application-level error model that is independent of the stochastic behavior of the underlying hardware-level fault model. The stochastic behavior of the hardware-level fault model is then plugged into the analysis to derive the probability of meeting all deadlines. Such a probabilistic guarantee is the level of assurance (i.e., reliability) regarding the correct functional and timing behavior of the system. One of the salient features of the FTM algorithm is that it executes some backups actively, to exploit the parallel multicore architecture, and others passively, to avoid the unnecessary execution of too many active backups. This paper also proposes a scheme to determine, for each task, the number of backups that should run in active redundancy in order to increase the probability of meeting all deadlines. The effectiveness of the proposed approach is demonstrated using an example application.
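For intuition about why extra backups raise the probabilistic guarantee, consider a deliberately simplified model (an illustration only, not the paper's analysis): if each copy of a task fails independently with probability \(p\), then one primary plus \(k\) backups miss only when all \(k+1\) copies fail, so

\[ \Pr[\text{success}] \;=\; 1 - p^{\,k+1}. \]

Each additional backup thus multiplies the failure probability by \(p\); the actual analysis must additionally account for deadlines, scheduling interference, and the active/passive distinction.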

9.
In this article, a filter feature weighting technique for attribute selection in classification problems, called LIA, is proposed. It has two main characteristics. First, unlike typical feature weighting methods, it is able to consider attribute interactions in the weighting process, rather than only evaluating single features. Attribute subsets are evaluated by projecting instances onto a grid defined by the attributes in the subset. Then, the joint relevance of the subset is computed by measuring the information present in the cells of the grid. The final weight for each attribute is computed by taking into account its performance in each of the grids in which it participates. Second, many real problems have low signal-to-noise ratios, due, for instance, to high noise levels, class overlap, class imbalance, or small training samples. LIA computes reliable local information for each of the cells by estimating the number of target-class instances not due to chance, given a confidence value. In order to study its properties, LIA has been evaluated on a collection of 18 real datasets and compared to two feature weighting methods (Chi-Squared and ReliefF) and a subset feature selection algorithm (CFS). Results show that the method is significantly better in many cases, and never significantly worse. LIA has also been tested with different grid dimensions (1, 2, and 3). The method works best when evaluating attribute subsets larger than 1, which shows the usefulness of considering attribute interactions.

10.
The Architecture Analysis & Design Language (AADL) has been increasingly adopted in the design of embedded systems, and the corresponding scheduling and formal verification have been well studied. However, little work takes code distribution and architecture exploration into account for distributed multi-processor systems, particularly under clock constraints. In this paper, we present an overview of our approach to handling these concerns, together with the associated toolchain, AADL-Polychrony-SynDEx. First, in order to avoid semantic ambiguities in AADL, we adopt a polychronous/multiclock semantics of AADL, based on a polychronous model of computation. Clock synthesis is then carried out in Polychrony, which bridges the gap between the polychronous semantics and the synchronous semantics of SynDEx. The same timing semantics is preserved throughout, in order to ensure the correctness of the transformations between the different formalisms. Code distribution and the corresponding scheduling are carried out on the resulting SynDEx model in the last step, which enables the exploration of architectures originally specified in AADL. Our contribution provides a fast yet efficient architecture exploration approach for the design of distributed real-time and embedded systems. An avionics case study illustrates our approach.

11.
Software developers rely on a build system to compile their source code changes and produce deliverables for testing and deployment. Since the full build of a large software system can take hours, the incremental build is a cornerstone of modern build systems. Incremental builds should only recompile deliverables whose dependencies have been changed by a developer. However, in many organizations, such dependencies are still identified by build rules that are specified and maintained (mostly) manually, typically using technologies like make. Incomplete rules lead to unspecified dependencies that can prevent certain deliverables from being rebuilt, yielding incomplete results and leaving sources and deliverables out of sync. In this paper, we present a case study on unspecified dependencies in the make-based build systems of the glib, openldap, linux and qt open source projects. To uncover unspecified dependencies in make-based build systems, we use an approach that combines a conceptual model of the dependencies specified in the build system with a concrete model of the files and processes that are actually exercised during the build. Our approach provides an overview of the dependencies used throughout the build system and reveals unspecified dependencies that are not yet expressed in the build system rules. During our analysis, we find that unspecified dependencies are common: we identify six common causes among more than 1.2 million unspecified dependencies.
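A minimal, hypothetical illustration of an unspecified dependency in a make rule: the object file below also depends on a header, but the rule never declares it, so the incremental build misses edits to that header:

    # parser.o also reads parser.h, but the rule does not say so:
    parser.o: parser.c
            cc -c parser.c -o parser.o

    # After 'touch parser.h', running 'make parser.o' reports the target
    # as up to date, leaving the deliverable stale and out of sync.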

12.
Human experts, as well as autonomous agents in a referral network, must decide whether to accept a task or refer it to a more appropriate expert, and if so, to whom. In order for the referral network to improve over time, the experts must learn to estimate the topical expertise of other experts. This article extends concepts from multi-agent reinforcement learning and active learning to distributed learning in referral networks. Among a wide array of algorithms evaluated, Distributed Interval Estimation Learning (DIEL), based on Interval Estimation Learning, was found to be superior at learning appropriate referral choices, compared to \(\varepsilon\)-Greedy, Q-learning, Thompson Sampling and Upper Confidence Bound (UCB) methods. In addition to a synthetic data set, we compare the performance of the stronger learning-to-refer algorithms on a referral network of high-performance Stochastic Local Search (SLS) SAT solvers, where expertise does not obey any known parameterized distribution. An evaluation of overall network performance and a robustness analysis are conducted across the learning algorithms, with an emphasis on capacity constraints and evolving networks, where experts with known expertise drop off and new experts of unknown performance enter, situations that arise in real-world scenarios but were heretofore ignored.
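A minimal sketch of the interval-estimation idea underlying DIEL (a simplification assuming rewards in [0, 1] and a fixed confidence multiplier in place of a proper t-quantile; all names here are illustrative): each expert keeps per-colleague reward statistics and refers to the colleague with the highest upper confidence bound on the mean reward.

    public class DielSketch {
        final double[] sum, sumSq;   // per-colleague reward statistics
        final int[] n;
        final double c = 1.96;       // fixed confidence multiplier (simplification)

        DielSketch(int colleagues) {
            sum = new double[colleagues];
            sumSq = new double[colleagues];
            n = new int[colleagues];
        }

        int chooseReferral() {
            int best = -1;
            double bestScore = Double.NEGATIVE_INFINITY;
            for (int i = 0; i < n.length; i++) {
                if (n[i] == 0) return i;   // try each colleague at least once
                double mean = sum[i] / n[i];
                double var = Math.max(0, sumSq[i] / n[i] - mean * mean);
                // Upper bound of the confidence interval on the mean reward.
                double score = mean + c * Math.sqrt(var / n[i]);
                if (score > bestScore) { bestScore = score; best = i; }
            }
            return best;
        }

        void observe(int colleague, double reward) {
            sum[colleague] += reward;
            sumSq[colleague] += reward * reward;
            n[colleague]++;
        }
    }

Optimistic estimates shrink as evidence accumulates, so under-sampled colleagues keep getting tried while consistently strong ones win out.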

13.
Source code terms such as method names and variable types are often different from conceptual words mentioned in a search query. This vocabulary mismatch problem can make code search inefficient. In this paper, we present COde voCABUlary (CoCaBu), an approach to resolving the vocabulary mismatch problem when dealing with free-form code search queries. Our approach leverages common developer questions and the associated expert answers to augment user queries with the relevant, but missing, structural code entities in order to improve the performance of matching relevant code examples within large code repositories. To instantiate this approach, we build GitSearch, a code search engine, on top of GitHub and Stack Overflow Q&A data. We evaluate GitSearch in several dimensions to demonstrate that (1) its code search results are correct with respect to user-accepted answers; (2) the results are qualitatively better than those of existing Internet-scale code search engines; (3) our engine is competitive against web search engines, such as Google, in helping users solve programming tasks; and (4) GitSearch provides code examples that are acceptable or interesting to the community as answers for Stack Overflow questions.

14.
We investigate cellular automata as acceptors for formal languages. In particular, we consider real-time one-way cellular automata (\(\text{OCA}\)) with the additional property that during a computation any cell of the \(\text{OCA}\) has the ability to dissolve itself; we call these shrinking one-way cellular automata (\(\text{SOCA}\)). It turns out that real-time \(\text{SOCA}\) are more powerful than real-time \(\text{OCA}\), since they can accept certain linear-time \(\text{OCA}\) languages. On the other hand, linear-time \(\text{OCA}\) are more powerful than real-time \(\text{SOCA}\), which is witnessed even by a unary language. Additionally, a construction is provided that enables real-time \(\text{SOCA}\) to accept the reversal of real-time iterative array languages. Finally, restricted real-time \(\text{SOCA}\) are investigated. We distinguish two restrictions on the dissolving of cells. The first restriction bounds, by some function, the total number of cells that are allowed to dissolve. In this case, an infinite strict hierarchy of language classes is obtained. The second restriction is inspired by an approach to limiting the amount of nondeterminism in \(\text{OCA}\). Compared with the first restriction, the total number of cells that may dissolve is still unbounded, but the number of time steps at which a cell may dissolve is bounded: dissolving is allowed only in the first k time steps, where \(k\ge 0\) is some constant. For this mode of operation an infinite, tight, and strict hierarchy of language classes is obtained as well.
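To make the dissolving mechanism concrete, here is a toy simulation sketch (the state encoding and the local rule delta below are placeholders, not from the paper): in a one-way CA each cell updates from its own state and its right neighbor's, and in the shrinking variant a cell whose rule yields a special DISSOLVE value is removed, making its neighbors adjacent.

    import java.util.ArrayList;
    import java.util.List;

    public class SocaSketch {
        static final int DISSOLVE = -1;   // special "remove me" state
        static final int BORDER   = 0;    // boundary symbol past the last cell

        /** Placeholder local rule: next state from (self, right neighbor). */
        static int delta(int self, int right) {
            return (self + right) % 3 == 0 ? DISSOLVE : (self + right) % 5;
        }

        /** One synchronous step; dissolved cells are removed afterwards. */
        static List<Integer> step(List<Integer> cells) {
            List<Integer> next = new ArrayList<>();
            for (int i = 0; i < cells.size(); i++) {
                int right = (i + 1 < cells.size()) ? cells.get(i + 1) : BORDER;
                int s = delta(cells.get(i), right);
                if (s != DISSOLVE) next.add(s);   // shrinking: drop dissolved cells
            }
            return next;
        }
    }

In the real-time acceptance setting, the leftmost surviving cell would decide acceptance after a number of steps equal to the input length.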

15.
Software security helps in identifying and managing risks. One of the effective ways to identify software vulnerabilities is to analyze its code. Code analysis (Chess & West, 2007) helps in catching common coding mistakes such as buffer overflows, unused variables, memory leaks, and various race conditions, which in turn optimizes computer programs, both in storage and computation aspects. Software developers use either open source or commercial tools for the verification and validation of software. Without proper validation of a software system against some standard guidelines, potential attackers can find ways to exploit vulnerabilities and bugs and, if successful, gain control over the system. In this paper, we discuss some of the open source static and dynamic code analysis tools, and their merits and limitations with respect to some target codes that contain possible threats. We consider the C/C++ and Java programming languages for our experiments. For static code analysis, we consider Flawfinder, Splint, Cppcheck, PMD, and FindBugs; for dynamic analysis, we use Valgrind and its plug-in, Memcheck, to analyze executables. We provide our observations in a comparison table, highlighting these tools' strengths and weaknesses.

16.
Ensembles of classifiers are among the best-performing classifiers available in many data mining applications, including the mining of data streams. Rather than training one classifier, multiple classifiers are trained, and their predictions are combined according to a given voting scheme. An important prerequisite for ensembles to be successful is that the individual models are diverse. One way to vastly increase the diversity among the models is to build a heterogeneous ensemble, comprised of fundamentally different model types. However, most ensembles developed specifically for the dynamic data stream setting rely on only one type of base-level classifier, most often Hoeffding Trees. We study the use of heterogeneous ensembles for data streams. We introduce the Online Performance Estimation framework, which dynamically weights the votes of the individual classifiers in an ensemble: using an internal evaluation on recent training data, it measures how well each ensemble member has performed recently and updates the weights accordingly. Experiments over a wide range of data streams show performance that is competitive with state-of-the-art ensemble techniques, including Online Bagging and Leveraging Bagging, while being significantly faster. All experimental results from this work are easily reproducible and publicly available online.
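A minimal sketch of the windowed weighting idea (a simplification; the Member interface and the plain window-accuracy weight are illustrative choices, not the paper's exact formulation): each member's weight is its accuracy over the last windowSize labeled examples, and the ensemble predicts by weighted vote.

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.HashMap;
    import java.util.Map;

    public class OnlineWeightedEnsemble {
        interface Member { int predict(double[] x); void train(double[] x, int label); }

        final Member[] members;
        final Deque<Boolean>[] recent;   // sliding window of per-member correctness
        final int windowSize;

        @SuppressWarnings("unchecked")
        OnlineWeightedEnsemble(Member[] members, int windowSize) {
            this.members = members;
            this.windowSize = windowSize;
            this.recent = new Deque[members.length];
            for (int i = 0; i < members.length; i++) recent[i] = new ArrayDeque<>();
        }

        double weight(int i) {            // windowed accuracy, 1.0 before any feedback
            if (recent[i].isEmpty()) return 1.0;
            long ok = recent[i].stream().filter(b -> b).count();
            return (double) ok / recent[i].size();
        }

        int predict(double[] x) {         // weighted majority vote over class labels
            Map<Integer, Double> votes = new HashMap<>();
            for (int i = 0; i < members.length; i++)
                votes.merge(members[i].predict(x), weight(i), Double::sum);
            return votes.entrySet().stream()
                        .max(Map.Entry.comparingByValue()).get().getKey();
        }

        void update(double[] x, int label) {  // called once the true label arrives
            for (int i = 0; i < members.length; i++) {
                recent[i].addLast(members[i].predict(x) == label);
                if (recent[i].size() > windowSize) recent[i].removeFirst();
                members[i].train(x, label);
            }
        }
    }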

17.
We propose a formal semantics for UML-RT, a UML profile for real-time and embedded systems. The formal semantics is given by mapping UML-RT models into a language called kiltera, a real-time extension of the \(\pi \)-calculus. Previous attempts to formalize the semantics of UML-RT have fallen short by considering only a very small subset of the language and providing fundamentally incomplete semantics based on incorrect assumptions, such as a one-to-one correspondence between “capsules” and threads. Our semantics is novel in several ways: (1) it deals with both state machine diagrams and capsule diagrams; (2) it deals with aspects of UML-RT that have not been formalized before, such as thread allocation, service provision points, and service access points; (3) it supports an action language; and (4) the translation has been implemented in the form of a transformation from UML-RT models created with IBM’s RSA-RTE tool, into kiltera code. To our knowledge, this is the most comprehensive formal semantics for UML-RT to date.

18.
ReFlO is a framework and interactive tool to record and systematize domain knowledge used by experts to derive complex pipe-and-filter (PnF) applications. Domain knowledge is encoded as transformations that alter PnF graphs by refinement (adding more details), flattening (removing modular boundaries), and optimization (substituting inefficient PnF graphs with more efficient ones). All three kinds of transformations arise in reverse-engineering legacy PnF applications. We present the conceptual foundation and tool capabilities of ReFlO, illustrate how parallel PnF applications are designed and generated, and how domain-specific libraries of transformations are developed.

19.
The skyline operator determines points in a multidimensional dataset that offer some optimal trade-off. State-of-the-art CPU skyline algorithms exploit quad-tree partitioning with complex branching to minimise the number of point-to-point comparisons. Branch-phobic GPU skyline algorithms rely on compute throughput rather than partitioning, but fail to match the performance of sequential algorithms. In this paper, we introduce a new skyline algorithm, SkyAlign, that is designed for the GPU, and a GPU-friendly, grid-based tree structure upon which the algorithm relies. The search tree allows us to dramatically reduce the amount of work done by the GPU algorithm by avoiding most point-to-point comparisons at the cost of some compute throughput. This trade-off allows SkyAlign to achieve orders of magnitude faster performance than its predecessors. Moreover, a NUMA-oblivious port of SkyAlign outperforms the native multicore state of the art on challenging workloads by an increasing margin as more cores and sockets are utilised.
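For readers unfamiliar with the operator: a point dominates another if it is at least as good in every dimension and strictly better in at least one (taking smaller-is-better as the convention here); the skyline is the set of non-dominated points. A naive quadratic sketch of that definition (nothing like SkyAlign's GPU strategy, purely for intuition):

    import java.util.ArrayList;
    import java.util.List;

    public class SkylineSketch {
        /** p dominates q: no worse in every dimension, strictly better in one. */
        static boolean dominates(double[] p, double[] q) {
            boolean strictlyBetter = false;
            for (int d = 0; d < p.length; d++) {
                if (p[d] > q[d]) return false;        // worse in dimension d
                if (p[d] < q[d]) strictlyBetter = true;
            }
            return strictlyBetter;
        }

        /** O(n^2) skyline by pairwise comparison; real algorithms avoid most of these. */
        static List<double[]> skyline(List<double[]> points) {
            List<double[]> result = new ArrayList<>();
            for (double[] p : points) {
                boolean dominated = false;
                for (double[] q : points)
                    if (q != p && dominates(q, p)) { dominated = true; break; }
                if (!dominated) result.add(p);
            }
            return result;
        }
    }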

20.
A Physical Error Estimation Tool (PEET), implemented in Matlab, is introduced for predicting physical gate errors of quantum information processing (QIP) operations by constructing and then simulating gate sequences for a wide variety of user-defined, Hamiltonian-based physical systems. PEET is designed to accommodate the interdisciplinary needs of quantum computing design by assessing gate performance for users familiar with the underlying physics of QIP, as well as those interested in higher-level computing operations. The structure of PEET separates the bulk of the physical details of a system into Gate objects, while the construction of quantum computing gate operations is contained in GateSequence objects. Gate errors are estimated by Monte Carlo sampling of noisy gate operations. The main utility of PEET, though, is the implementation of QuantumControl methods that generate and then test gate-sequence and pulse-shaping techniques for QIP performance. This work details the structure of PEET and gives instructive examples of its operation.
