Similar Documents
20 similar documents retrieved (search time: 62 ms)
1.
Esterel is a design language for the specification of real-time embedded systems. Based on the synchronous concurrency paradigm, its semantics describes execution as a succession of instants of computation. In this work, we consider the introduction of a new gotopause instruction in the language, which acts as a non-instantaneous jump instruction compatible with concurrency. It allows the programmer to activate state control points anywhere in the program, from where execution is resumed in the next instant. In order to provide the formal semantics of the extended language, we first define a state semantics of Esterel, which we prove observationally equivalent to the original logical behavioral semantics. Including gotopause in the state semantics is then straightforward. We sketch two key applications of our new primitive: a direct encoding of automata and a quasi-linear rewriting of programs eliminating schizophrenic behaviors.

2.
The paradigm of service-oriented computing revolutionized the field of software engineering. According to this paradigm, new systems are composed of existing stand-alone services to support complex cross-organizational business processes. Correct communication of these services is not possible without a proper coordination mechanism. The Reo coordination language is a channel-based modeling language that introduces various types of channels and their composition rules. By composing Reo channels, one can specify Reo connectors that realize arbitrary complex behavioral protocols. Several formalisms have been introduced to give semantics to Reo. In their most basic form, they reflect service synchronization and dataflow constraints imposed by connectors. To ensure that the composed system behaves as intended, we need a wide range of automated verification tools to assist service composition designers. In this paper, we present our framework for the verification of Reo using the mCRL2 toolset. We unify our previous work on mapping various semantic models for Reo, namely, constraint automata, timed constraint automata, coloring semantics and the newly developed action constraint automata, to the process algebraic specification language of mCRL2, address the correctness of this mapping, discuss tool support, and present a detailed example that illustrates the use of Reo empowered with mCRL2 for the analysis of dataflow in service-based process models.

3.
Program slicing is an effective technique for analyzing concurrent programs. However, when a conventional closure-based slicing algorithm for sequential programs is applied to a concurrent interprocedural program, the slice is usually imprecise owing to the intransitivity of interference dependence. Interference dependence arises when a statement uses a variable defined in another statement executed concurrently. In this study, we propose a global dependence analysis approach based on a program reachability graph, and construct a novel dependence graph called the marking-statement dependence graph (MSDG), in which each vertex is a 2-tuple of program state and statement. In contrast to the conventional program dependence graph, where each vertex is a statement, the dependence relation in the MSDG is transitive. When traversing the MSDG, a precise slice is obtained. To enhance slicing efficiency without loss of precision, our slicing algorithm adopts a hybrid strategy: the procedures containing interaction statements between threads are inlined and sliced by the slicing algorithm based on program reachability graphs, while other procedures are sliced as sequential programs. We have implemented our algorithm and three other representative slicing algorithms, and conducted an empirical study on concurrent Java programs. The experimental results show that our algorithm computes more precise slices than the other algorithms. Using partial-order reduction techniques, which are effective for reducing the size of a program reachability graph without loss of precision, our algorithm is optimized, thereby improving its performance to some extent.
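As a point of reference for the imprecision discussed above, the following Python sketch shows the conventional closure-based backward slicing that the MSDG approach improves on: a plain transitive closure over dependence edges. The dependence graph and statement labels (s1-s4) are hypothetical, and this is not the paper's MSDG algorithm.

```python
# Hypothetical example: conventional closure-based backward slicing, i.e. a
# transitive closure over dependence edges. Treating the intransitive
# interference dependence between threads like any other edge is what makes
# such slices imprecise for concurrent programs.

def backward_slice(dependences, criterion):
    """dependences: dict mapping a statement to the statements it depends on."""
    visited, worklist = set(), [criterion]
    while worklist:
        stmt = worklist.pop()
        if stmt in visited:
            continue
        visited.add(stmt)
        worklist.extend(dependences.get(stmt, ()))
    return visited

deps = {
    "s4": {"s3"},   # data dependence within one thread
    "s3": {"s2"},   # interference dependence across threads (intransitive)
    "s2": {"s1"},   # data dependence in the other thread
}
print(sorted(backward_slice(deps, "s4")))   # ['s1', 's2', 's3', 's4']
# s1 may be irrelevant to s4 in every real interleaving, yet the closure keeps it.
```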

4.
Program slicing is a technique by which statements are deleted from a program in such a way as to preserve a projection of the original program's semantics. It is shown that slicing algorithms based upon traditional defined and referenced variable sets do not preserve a projection of strict semantics with respect to computations which cause errors. Rather, these approaches preserve a projection of the program's semantics which is lazy with respect to errors. A modified version of defined and referenced variable sets is introduced, which provides the freedom to choose the form of semantics to be preserved. In describing a slicing criterion, it is conventional to label program points with line numbers. These line numbers are unique identifiers, one for each node in the program's Control Flow Graph [FOW87].
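The distinction between strict and lazy preservation can be seen on a two-statement program. The sketch below is not from the paper; it is a hypothetical Python illustration of how deleting a statement that neither defines nor references the criterion variable removes an error that strict semantics would observe.

```python
# Hypothetical example: the faulty statement neither defines nor references x,
# so a slicer based on defined/referenced variable sets deletes it from the
# slice on x.

def original():
    y = 1 // 0        # raises ZeroDivisionError under strict semantics
    x = 42            # slicing criterion: the final value of x
    return x

def slice_wrt_x():
    x = 42
    return x

if __name__ == "__main__":
    print(slice_wrt_x())          # 42
    try:
        print(original())
    except ZeroDivisionError:
        # The original never reaches x, so the slice only preserves a
        # projection that is lazy with respect to this error.
        print("original raises before x is defined")
```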

5.
A program schema defines a class of programs, all of which have identical statement structure, but whose functions and predicates may differ. A schema thus defines an entire class of programs according to how its symbols are interpreted. A subschema of a schema is obtained from a schema by deleting some of its statements. We prove that given a schema S which is predicate-linear, free and liberal, such that the true and false parts of every if predicate satisfy a simple additional condition, and a slicing criterion defined by the final value of a given variable after execution of any program defined by S, the minimal subschema of S which respects this slicing criterion contains all the function and predicate symbols ‘needed’ by the variable according to the data dependence and control dependence relations used in program slicing, which is the symbol set given by Weiser’s static slicing algorithm. Thus this algorithm gives predicate-minimal slices for classes of programs represented by schemas satisfying our set of conditions. We also give an example to show that the corresponding result with respect to the slicing criterion defined by termination behaviour is incorrect. This complements a result by the authors in which S was required to be function-linear, instead of predicate-linear.

6.
We define a program semantics that is preserved by dependence-based slicing algorithms. It is a natural extension, to non-terminating programs, of the semantics introduced by Weiser (which only considered terminating ones) and, as such, is an accurate characterisation of the semantic relationship between a program and the slice produced by these algorithms. Unlike other approaches, apart from Weiser’s original one, it is based on strict standard semantics, which models the ‘normal’ execution of programs on a von Neumann machine and thus has the advantage of being intuitive. This is essential since one of the main applications of slicing is program comprehension. Although our semantics handles non-termination, it is defined wholly in terms of finite trajectories, without having to resort to complex, counter-intuitive, non-standard models of computation. As well as being simpler, unlike other approaches to this problem, our semantics is substitutive. Substitutivity is an important property because it greatly enhances the ability to reason about correctness of meaning-preserving program transformations such as slicing.

7.
The PiDuce project comprises a programming language and a distributed runtime environment devised for experimenting with Web services technologies, relying on solid theories about process calculi and formal languages for XML documents and schemas. The language features values and datatypes that extend XML documents and schemas with channels, an expressive type system with subtyping, a pattern-matching mechanism for deconstructing XML values, and control constructs that are based on Milner’s asynchronous pi calculus. The runtime environment supports the execution of PiDuce processes over networks by relying on state-of-the-art technologies, such as XML Schema and WSDL, thus enabling interoperability with existing Web services. We thoroughly describe the PiDuce project: the programming language and its semantics, the architecture of the distributed runtime, and its implementation.

8.
This paper is an in-depth study of qualitative physical reasoning about one particular scenario: using a box to carry a collection of objects from one place to another. Specifically, we consider the plan plan1: “Load objects uCargo into box oBox one by one; carry oBox from location l1 to location l2”. We present qualitative constraints on the shape, starting position, and material properties of uCargo and oBox, and on the characteristics of the motion, that suffice to make it virtually certain that plan1 can be successfully executed. We develop a theory, consisting mostly of first-order statements together with two default rules, that supports an inference of the form “If conditions XYZ hold, and the agent attempts to carry out plan1, then presumably he will succeed”. Our theory is elaboration-tolerant in the sense that carrying out the analogous inference for carrying objects in boxes with lids, in boxes with small holes, or on trays can reuse much of the same knowledge. The theory integrates reasoning about continuous time, Euclidean space, commonsense dynamics of solid objects, and semantics of partially specified plans.

9.
In the scientific community, feature models are the de facto standard for representing variability in software product line engineering. This is different from industrial settings, where they appear to be used much less frequently. We and other authors found that in a number of cases they lack concision, naturalness and expressiveness; this is confirmed by industrial experience. When modelling variability, feature attributes are an efficient tool for making models intuitive and concise. Yet the semantics of feature models with attributes is not well understood, and most existing notations do not support them at all. Furthermore, the graphical nature of feature models’ syntax also appears to be a barrier to industrial adoption, both psychological and rational. Existing tool support for graphical feature models is lacking or inadequate, and inferior in many regards to tool support for text-based formats. To overcome these shortcomings, we designed TVL, a text-based feature modelling language. In terms of expressiveness, TVL subsumes most existing dialects. The main goal of designing TVL was to provide engineers with a human-readable language with a rich syntax to make modelling easy and models natural, but also with a formal semantics to avoid ambiguity and allow powerful automation.

10.
A Dynamic Program Slicing Approach Based on Modular Monadic Semantics
We propose a new dynamic slicing approach based on the modular monadic semantics of programs, called modular monadic dynamic slicing. First, slicing, viewed as a kind of computation, is abstracted through a monad transformer into a language-independent entity: the slicing monad transformer. This transformer is then loaded as a module into the actual program, and a corresponding modular monadic dynamic slicing algorithm is given. With this approach, dynamic slices can be computed directly on the abstract syntax structure, without recording the program's execution history; the corresponding monadic slicer also need not explicitly construct intermediate structures such as dependence graphs. This modular abstraction mechanism gives the proposed dynamic slicing algorithm strong extensibility and reusability.
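The following Python sketch is only an illustration of the on-the-fly idea, not the paper's monadic formulation: dynamic slices are threaded through the interpretation itself (here as an explicit map), so no execution history or dependence graph is recorded. The statement format and the `run` function are hypothetical.

```python
# Hypothetical sketch: dynamic slices computed during interpretation itself,
# so no execution trace or dependence graph is ever materialised. Each
# variable carries the set of statement labels its current value depends on.

def run(stmts):
    env = {}          # variable -> current value
    dyn = {}          # variable -> dynamic slice of its current value
    for label, target, expr, uses in stmts:
        # the slice of `target` is this statement plus the slices of the
        # variables its right-hand side actually used in this execution
        dyn[target] = {label}.union(*(dyn.get(v, set()) for v in uses))
        env[target] = eval(expr, {}, dict(env))
    return env, dyn

program = [
    # (label, target, Python expression, variables it uses)
    (1, "a", "2", ()),
    (2, "b", "a + 3", ("a",)),
    (3, "c", "7", ()),
    (4, "d", "b * c", ("b", "c")),
]
env, slices = run(program)
print(env["d"], sorted(slices["d"]))   # 35 [1, 2, 3, 4]
print(sorted(slices["b"]))             # [1, 2]
```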

11.
Multiple dispatch-the selection of a function to be invoked based on the dynamic type of two or more arguments-is a solution to several classical problems in object-oriented programming. Open multi-methods generalize multiple dispatch towards open-class extensions, which improve separation of concerns and provisions for retroactive design. We present the rationale, design, implementation, performance, programming guidelines, and experiences of working with a language feature, called open multi-methods, for C++. Our open multi-methods support both repeated and virtual inheritance. Our call resolution rules generalize both virtual function dispatch and overload resolution semantics. After using all information from argument types, these rules can resolve further ambiguities by using covariant return types. Care was taken to integrate open multi-methods with existing C++ language features and rules. We describe a model implementation and compare its performance and space requirements to existing open multi-method extensions and work-around techniques for C++. Compared to these techniques, our approach is simpler to use, catches more user mistakes, and resolves more ambiguities through link-time analysis, is comparable in memory usage, and runs significantly faster. In particular, the runtime cost of calling an open multi-method is constant and less than the cost of a double dispatch (two virtual function calls). Finally, we provide a sketch of a design for open multi-methods in the presence of dynamic loading and linking of libraries.  相似文献   
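For readers unfamiliar with the feature, the core idea of multiple dispatch can be illustrated independently of C++. The Python sketch below is hypothetical (the `multi` and `dispatch` helpers and the Shape classes are not from the paper, and its resolution rule is far simpler than the proposed C++ semantics): the implementation is picked from the dynamic types of both arguments, with fallback along base classes.

```python
# Hypothetical sketch of multiple dispatch: the implementation is chosen from
# the dynamic types of *both* arguments, not just the receiver.

_registry = {}

def multi(type_a, type_b):
    """Register an implementation for a pair of argument types."""
    def decorate(fn):
        _registry[(type_a, type_b)] = fn
        return fn
    return decorate

def dispatch(a, b):
    # walk the MROs so subclasses fall back to implementations for their bases
    for ta in type(a).__mro__:
        for tb in type(b).__mro__:
            fn = _registry.get((ta, tb))
            if fn is not None:
                return fn(a, b)
    raise TypeError(f"no implementation for {type(a).__name__}, {type(b).__name__}")

class Shape: ...
class Circle(Shape): ...
class Rectangle(Shape): ...

@multi(Shape, Shape)
def intersect_generic(a, b):
    return "generic shape intersection"

@multi(Circle, Rectangle)
def intersect_circle_rectangle(a, b):
    return "specialised circle/rectangle intersection"

print(dispatch(Circle(), Rectangle()))  # specialised circle/rectangle intersection
print(dispatch(Rectangle(), Circle()))  # generic shape intersection
```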

12.
A program schema defines a class of programs, all of which have identical statement structure, but whose functions and predicates may differ. A schema thus defines an entire class of programs according to how its symbols are interpreted. As defined in this paper, a slice of a schema is obtained from a schema by deleting some of its statements. We prove that given a schema S which is function-linear, free and liberal, and a slicing criterion defined by the final value of a given variable after execution of any program defined by S, the minimal slice of S which respects this slicing criterion contains only the symbols ‘needed’ by the variable according to the data dependence and control dependence relations used in program slicing, which is the symbol set given by Weiser’s static slicing algorithm. Thus this algorithm gives minimal slices for programs representable by function-linear, free, liberal schemas. We also prove a similar result with termination behaviour used as a slicing criterion. This strengthens a recent result, in which S was required to be linear, free and liberal, and termination behaviour as a slicing criterion was not considered.

13.
Slicing is a program analysis technique which can be used to reduce the size of the model and avoid state-space explosion in model checking. In this work, a static slicing technique is proposed for reducing Rebeca models with respect to a property. For applying the actor-based slicing techniques, the Rebeca control flow graph (RCFG) and the Rebeca dependence graph (RDG) are introduced. We propose two different approaches for constructing the RDG, where each approach can be more effective under certain conditions. As static slicing usually produces large slices, two other slicing-based reduction techniques, step-wise slicing and bounded slicing, are proposed as simple novel ideas. Step-wise slicing first generates slices that overapproximate the behavior of the original model and then refines them, and bounded slicing is based on the semantics of nondeterministic assignments in Rebeca. We also propose a static slicing algorithm for deadlock detection (in the absence of any particular property). The efficiency of these techniques is checked by applying them to several case studies, which are included in this paper. Similar techniques can be applied to other actor-based languages.

14.
15.
16.
Within the MNPBEM toolbox, we show how to simulate electron energy loss spectroscopy (EELS) of plasmonic nanoparticles using a boundary element method approach. The methodology underlying our approach closely follows the concepts developed by García de Abajo and coworkers (García de Abajo, 2010). We introduce two classes, eelsret and eelsstat, which, in combination with our recently developed MNPBEM toolbox, allow for a simple, robust, and efficient computation of EEL spectra and maps. The classes are accompanied by a number of demo programs for EELS simulation of metallic nanospheres, nanodisks, and nanotriangles, and for electron trajectories passing by or penetrating through the metallic nanoparticles. We also discuss how to compute electric fields induced by the electron beam and cathodoluminescence.

17.
Discrete classification problems abound in pattern recognition and data mining applications. One of the most common discrete rules is the discrete histogram rule. This paper presents exact formulas for the computation of bias, variance, and RMS of the resubstitution and leave-one-out error estimators for the discrete histogram rule. We also describe an algorithm to compute the exact probability distribution of resubstitution and leave-one-out, as well as their deviations from the true error rate. Using a parametric Zipf model, we compute the exact performance of resubstitution and leave-one-out for varying expected true error, number of samples, and classifier complexity (number of bins). We compare this to approximate performance measures, computed by Monte-Carlo sampling, of 10-repeated 4-fold cross-validation and the 0.632 bootstrap error estimator. Our results show that resubstitution is low-biased but much less variable than leave-one-out, and is effectively the superior error estimator between the two, provided classifier complexity is low. In addition, our results indicate that the overall performance of resubstitution, as measured by the RMS, can be substantially better than that of the 10-repeated 4-fold cross-validation estimator, and even comparable to the 0.632 bootstrap estimator, provided that classifier complexity is low and the expected error rates are moderate. In addition to the results discussed in the paper, we provide an extensive set of plots that can be accessed on a companion website at the URL http://ee.tamu.edu/edward/exact_discrete.
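The two estimators contrasted in the paper are easy to state operationally. The sketch below is a hypothetical Python illustration for a two-class discrete histogram rule (the `histogram_rule` helper and the tiny data set are invented, and this is not the paper's exact formulas or its Zipf-model study): resubstitution reuses the training data for evaluation, while leave-one-out retrains on all but the evaluated point.

```python
# Hypothetical sketch contrasting resubstitution and leave-one-out error
# estimation for the discrete histogram rule: each bin is labelled with the
# majority class of the training points that fall into it.

from collections import Counter

def histogram_rule(samples, break_ties_to=0):
    """samples: list of (bin, label) pairs with labels in {0, 1}."""
    votes = {}
    for b, y in samples:
        votes.setdefault(b, Counter())[y] += 1
    def classify(b):
        c = votes.get(b)
        if c is None or c[0] == c[1]:
            return break_ties_to          # empty bin or tie
        return 0 if c[0] > c[1] else 1
    return classify

def resubstitution_error(samples):
    clf = histogram_rule(samples)          # evaluate on the training data itself
    return sum(clf(b) != y for b, y in samples) / len(samples)

def leave_one_out_error(samples):
    errors = 0
    for i, (b, y) in enumerate(samples):   # retrain without the evaluated point
        clf = histogram_rule(samples[:i] + samples[i + 1:])
        errors += clf(b) != y
    return errors / len(samples)

data = [(0, 0), (0, 0), (0, 1), (1, 1), (1, 1), (1, 0), (2, 0), (2, 1)]
print(resubstitution_error(data))   # 0.375 (optimistic: training data reused)
print(leave_one_out_error(data))    # 0.75  (each point judged by a retrained rule)
```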

18.
We propose a novel dynamic program slicing technique for concurrent object-oriented programs. Our technique uses a Concurrent System Dependence Graph (CSDG) as the intermediate program representation. We mark and unmark the edges in the CSDG appropriately as and when the dependences arise and cease at run time: an edge is marked when its associated dependence exists and unmarked when the dependence ceases to exist. Our approach eliminates the use of trace files. Another advantage of our approach is that when a request for a slice is made, it is already available. This appreciably reduces the response time of slicing commands.
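A simplified picture of the mark/unmark idea: dependence edges are switched on and off as dependences arise and cease, so the slice for any criterion is just a closure over the currently marked edges and is available the moment it is requested. The class and node names below are hypothetical, not the CSDG construction from the paper.

```python
# Hypothetical sketch: keeping slices ready during execution by marking and
# unmarking dependence edges, so no trace file is needed and a slice request
# is answered immediately.

class DependenceGraph:
    def __init__(self):
        self.marked = {}              # node -> set of nodes it currently depends on

    def mark(self, node, depends_on):
        self.marked.setdefault(node, set()).add(depends_on)

    def unmark(self, node, depends_on):
        self.marked.get(node, set()).discard(depends_on)

    def slice(self, criterion):
        """Backward closure over the currently marked edges."""
        result, worklist = set(), [criterion]
        while worklist:
            n = worklist.pop()
            if n in result:
                continue
            result.add(n)
            worklist.extend(self.marked.get(n, ()))
        return result

g = DependenceGraph()
g.mark("s4", "s2")     # s4 uses a value defined at s2
g.mark("s2", "s1")
g.mark("s4", "s3")
g.unmark("s4", "s3")   # the definition at s3 was killed before s4 executed
print(sorted(g.slice("s4")))   # ['s1', 's2', 's4']
```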

19.
A cryptographic protocol is a distributed program that can be executed by several actors. Since several runs of the protocol within the same execution are allowed, models of cryptoprotocols are often infinite. Sometimes, for verification purposes, only a finite and approximated model is needed. For this, we consider the problem of computing such an approximation and we propose to simulate the required partial execution at an abstract level. More precisely, we define an abstract finite category G_a as an abstract game semantics for the SPC calculus, a dedicated calculus for security protocols. The abstract semantics is then used to build a decision procedure for secrecy correctness in security protocols.

20.
We present a software package that guesses formulae for sequences of, for example, rational numbers or rational functions, given the first few terms. We implement an algorithm due to Bernhard Beckermann and George Labahn, together with some enhancements to render our package efficient. Thus we extend and complement Christian Krattenthaler’s program Rate.m, the parts concerned with guessing of Bruno Salvy and Paul Zimmermann’s GFUN, the univariate case of Manuel Kauers’ Guess.m and Manuel Kauers’ and Christoph Koutschan’s qGeneratingFunctions.m.
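To give a flavour of what "guessing" means here, the hypothetical Python sketch below looks for a linear recurrence with constant coefficients that fits the supplied terms and verifies it against the remaining ones. The `guess_recurrence` helper is invented for illustration; it is not the Beckermann-Labahn algorithm that the package actually implements.

```python
# Hypothetical sketch of guessing: fit a constant-coefficient linear
# recurrence to the first terms, then check it against every remaining term.

from fractions import Fraction

def solve(A, b):
    """Exact Gaussian elimination over Fractions; returns x with A x = b, or None."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            return None
        M[col], M[pivot] = M[pivot], M[col]
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                factor = M[r][col]
                M[r] = [vr - factor * vc for vr, vc in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]

def guess_recurrence(terms, max_order=4):
    terms = [Fraction(t) for t in terms]
    for r in range(1, max_order + 1):
        if len(terms) < 2 * r + 1:
            break
        # look for c with a(n) = c[0]*a(n-r) + ... + c[r-1]*a(n-1) ...
        A = [[terms[i + j] for j in range(r)] for i in range(r)]
        b = [terms[i + r] for i in range(r)]
        c = solve(A, b)
        if c is None:
            continue
        # ... and verify the guess on every remaining term before accepting it
        if all(sum(ci * terms[n - r + i] for i, ci in enumerate(c)) == terms[n]
               for n in range(r, len(terms))):
            return c
    return None

# Fibonacci: the guess is a(n) = a(n-2) + a(n-1)
print(guess_recurrence([1, 1, 2, 3, 5, 8, 13, 21]))   # [Fraction(1, 1), Fraction(1, 1)]
```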

