Similar Documents (20 results found)
1.
The uml Profile for Modeling and Analysis of Real-Time and Embedded (RTE) systems has recently been adopted by the OMG. Its Time Model extends the informal and simplistic Simple Time package of the Unified Modeling Language (UML2) and offers a broad range of capabilities required to model RTE systems, including discrete/dense and chronometric/logical time. The Marte specification introduces a Time Structure inspired by several time models from concurrency theory and proposes a new clock constraint specification language (ccsl) to specify, within the context of the uml, logical and chronometric time constraints. A semantic model in ccsl is attached to a (uml) model to give its timed causality semantics. In that sense, ccsl is comparable to the Ptolemy environment, in which directors give semantics to models according to predefined models of computation and communication. This paper focuses on one of Ptolemy's historical models of computation, Synchronous Data Flow (SDF), and shows how to build SDF graphs by combining uml models and ccsl.
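To recall what the SDF model of computation referenced above prescribes (a textbook illustration, not the paper's uml/ccsl construction): each actor produces and consumes a fixed number of tokens per firing, and a consistent graph admits a repetition vector solving the balance equation q[src]·p = q[dst]·c for every edge. The sketch below, with purely illustrative actor names and rates, computes that vector.

```python
from fractions import Fraction
from math import lcm

# Toy SDF graph: (source, destination, tokens produced per firing, tokens consumed per firing).
# Names and rates are illustrative only.
edges = [("A", "B", 2, 3),
         ("B", "C", 1, 2)]

q = {"A": Fraction(1)}          # fix one actor's rate and propagate along edges
changed = True
while changed:
    changed = False
    for src, dst, p, c in edges:
        if src in q and dst not in q:
            q[dst] = q[src] * p / c
            changed = True
        elif dst in q and src not in q:
            q[src] = q[dst] * c / p
            changed = True

# Scale to the smallest positive integer solution of the balance equations.
scale = lcm(*(r.denominator for r in q.values()))
repetitions = {actor: int(r * scale) for actor, r in q.items()}
print(repetitions)  # {'A': 3, 'B': 2, 'C': 1}
```

With these rates, A must fire three times and B twice for every firing of C, so one iteration returns the buffers to their initial state.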

2.
The theory of hybrid systems is well established as a model for real-world systems consisting of continuous behaviour and discrete control. In practice, the behaviour of such systems is also subject to uncertainties, such as measurement errors, or is controlled by randomised algorithms. These aspects can be modelled and analysed using stochastic hybrid systems. In this paper, we present HModest, an extension to the Modest modelling language—which was originally designed for stochastic timed systems without complex continuous aspects—that adds differential equations and inclusions as an expressive way to describe the continuous system evolution. Modest is a high-level language inspired by classical process algebras, thus compositional modelling is an integral feature. We define the syntax and semantics of HModest and show that it is a conservative extension of Modest that retains the compositional modelling approach. To allow the analysis of HModest models, we report on the implementation of a connection to recently developed tools for the safety verification of stochastic hybrid systems, and illustrate the language and the tool support with a set of small but instructive case studies.

3.
A planning and scheduling (P&S) system takes as input a domain model and a goal, and produces a plan of actions which, when executed, will achieve the goal. A P&S system typically also offers plan execution and monitoring engines. Due to the non-deterministic nature of planning problems, it is a challenge to construct correct and reliable P&S systems, including, for example, their declarative domain models. Verification and validation (V&V) techniques have been applied to address these issues. Furthermore, V&V systems have been applied to actually perform planning, and conversely, P&S systems have been applied to perform V&V of more traditional software. This article overviews some of the literature on the fruitful interaction between V&V and P&S.

4.
The adoption of Artificial Neural Networks (ANNs) in safety-related applications is often avoided because it is difficult to rule out possible misbehaviors with traditional analytical or probabilistic techniques. In this paper, we present NeVer, our tool for checking safety of ANNs. NeVer encodes the problem of verifying safety of ANNs into the problem of satisfying corresponding Boolean combinations of linear arithmetic constraints. We describe the main verification algorithm and the structure of NeVer. We also present empirical results confirming the effectiveness of NeVer on realistic case studies.
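To make the encoding idea above concrete, here is a minimal, hypothetical sketch (not NeVer's actual algorithm, encoding, or interface) of how a safety check for a single ReLU neuron can be stated as a Boolean combination of linear arithmetic constraints and discharged by an off-the-shelf SMT solver; the weights, bounds, and the use of z3 are assumptions made only for illustration.

```python
# Sketch only: one ReLU neuron y = relu(w*x + b); claim: for all x in [0, 1], y <= 2.
# We assert the negation of the claim; an unsat answer means the property holds.
from z3 import Real, Solver, Or, And, sat

w, b = 1.5, 0.25                      # illustrative weights, not from the paper
x, z, y = Real("x"), Real("z"), Real("y")

s = Solver()
s.add(x >= 0, x <= 1)                 # input domain
s.add(z == w * x + b)                 # affine layer
s.add(Or(And(z >= 0, y == z),         # ReLU as a case split: a Boolean
         And(z < 0,  y == 0)))        # combination of linear constraints
s.add(y > 2)                          # negated safety property

print("counterexample found" if s.check() == sat else "property holds on this toy instance")
```

Real networks require one such case split per ReLU, which is why dedicated tools and abstractions are needed to keep the search tractable.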

5.
The Architecture Analysis & Design Language (AADL) has been increasingly adopted in the design of embedded systems, and the corresponding scheduling and formal verification have been well studied. However, little work takes code distribution and architecture exploration into account for distributed multi-processor systems, particularly in the presence of clock constraints. In this paper, we present an overview of our approach to handle these concerns, together with the associated toolchain, AADL-Polychrony-SynDEx. First, in order to avoid the semantic ambiguities of AADL, the polychronous/multiclock semantics of AADL, based on a polychronous model of computation, is considered. Clock synthesis is then carried out in Polychrony, which bridges the gap between the polychronous semantics and the synchronous semantics of SynDEx. The same timing semantics is always preserved in order to ensure the correctness of the transformations between the different formalisms. Code distribution and the corresponding scheduling are carried out on the obtained SynDEx model in the last step, which enables the exploration of architectures originally specified in AADL. Our contribution provides a fast and effective architecture exploration approach for the design of distributed real-time and embedded systems. An avionics case study illustrates our approach.

6.
We propose an effective procedure, the first one to our knowledge, for translating a proof term of the Calculus of Inductive Constructions (CIC) into a tactical expression of the high-level specification language of a CIC-based proof assistant like coq (Coq development team 2008) or matita (Asperti et al., J Autom Reason 39:109–139, 2007). This procedure, which should not be considered definitive at its present stage, is intended for translating the logical representation of a proof coming from any source, i.e. from a digital library or from another proof development system, into an equivalent proof presented in the proof assistant's editable high-level format. To demonstrate the effectiveness of our procedure, we report on its implementation in matita and on the translation of a significant set of proofs (Guidi, ACM Trans Comput Log 2009) from their logical representation as coq 7.3.1 (Coq development team 2002) CIC proof terms to their high-level representation as tactical expressions of matita's user interface language.

7.
We propose a formal approach for the definition and analysis of domain-specific modelling languages (dsml). The approach uses standard model-driven engineering artifacts for defining a language’s syntax (using metamodels) and its operational semantics (using model transformations). We give formal meanings to these artifacts by translating them to the Maude language: metamodels and models are mapped to equational specifications, and model transformations are mapped to rewrite rules between such specifications, which are also expressible in Maude due to Maude’s reflective capabilities. These mappings provide us, on the one hand, with abstract definitions of the mde concepts used for defining dsml, which naturally capture their intended meanings; and, on the other hand, with equivalent executable definitions, which can be directly used by Maude for formal verification. We also study a notion of operational semantics-preserving model transformations, which are model transformations between two dsml that ensure that each execution of a transformed instance is matched by an execution of the original instance. We propose a semi-decision procedure, implemented in Maude, for checking the semantics-preserving property. We also show how the procedure can be adapted for tracing finite executions of the transformed instance back to matching executions of the original one. The approach is illustrated on xspem, a language for describing the execution of activities constrained by time, precedence, and resource availability.
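As a loose, hedged analogy for what "operational semantics as rewrite rules" means here (written in Python rather than Maude, and not taken from the paper): a model is a set of objects, and one execution step applies a rule that matches part of the model and rewrites it. The xspem-like activity states and attribute names below are invented purely for illustration.

```python
# Toy model: activities with states and an illustrative precedence attribute.
model = {
    "a1": {"kind": "Activity", "state": "finished"},
    "a2": {"kind": "Activity", "state": "notStarted", "after": "a1"},
}

def step(m):
    """Apply one rewrite rule: an activity whose predecessor has finished may start."""
    for name, obj in m.items():
        pred = obj.get("after")
        if obj["state"] == "notStarted" and pred and m[pred]["state"] == "finished":
            out = dict(m)
            out[name] = {**obj, "state": "inProgress"}   # right-hand side of the rule
            return out
    return None  # no rule applies: the execution has terminated

print(step(model))  # a2 moves to "inProgress"
```

In the approach described above, such rules live as Maude rewrite rules over equational specifications, so the same artifact is both the language definition and an executable, verifiable semantics.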

8.
Reasoning about the termination of equational programs in sophisticated equational languages such as Elan, Maude, OBJ, CafeOBJ, and Haskell requires support for advanced features such as evaluation strategies, rewriting modulo, use of extra variables in conditions, partiality, and expressive type systems (possibly including polymorphism and higher-order features). However, many of those features are, at best, only partially supported by current term rewriting termination tools (for instance mu-term, CiME, AProVE, TTT, or Termptation), while they may be essential to ensure termination. We present a sequence of theory transformations that can be used to bridge the gap between expressive membership equational programs and such termination tools, and we prove the correctness of these transformations. We also discuss a prototype tool that performs the transformations on Maude equational programs and sends the resulting transformed theories to some of the aforementioned standard termination tools.

9.
10.
We present a specification theory for timed systems implemented in the Ecdar tool. We illustrate the operations of the specification theory on a running example, showing the models and verification checks. To demonstrate the power of compositional verification, we perform an in-depth case study of a leader election protocol, modelling it in Ecdar as timed input/output automata specifications and performing both monolithic and compositional verification of two interesting properties on it. We compare the execution times of the compositional and the classical verification, showing a huge difference in favor of compositional verification.

11.
S-Net is a declarative coordination language and component technology aimed at radically facilitating software engineering for modern parallel compute systems by near-complete separation of concerns between application (component) engineering and concurrency orchestration. S-Net builds on the concept of stream processing to structure networks of communicating asynchronous components implemented in a conventional (sequential) language. In this paper we present the design, implementation and evaluation of a new and innovative runtime system for S-Net streaming networks. The Front runtime system outperforms the existing implementations of S-Net by orders of magnitude for stress-test benchmarks, significantly reduces runtimes of fully-fledged parallel applications with compute-intensive components and achieves good scalability on our 48-core test system.

12.
Automata have proved to be a useful tool in infinite-state model checking, since they can represent infinite sets of integers and reals. However, as with the use of binary decision diagrams (bdds) to represent finite sets, the size of the automata is an obstacle in automata-based set representations. In this article, we generalize the notion of “don’t cares” for bdds to word languages as a means to reduce automaton sizes. We show that the minimal weak deterministic Büchi automaton (wdba) with respect to a given don’t care set, under certain restrictions, is uniquely determined and can be efficiently constructed. We apply don’t cares to improve the efficiency of a wdba-based decision procedure for the first-order logic of mixed linear arithmetic over the integers and the reals.

13.
14.
Developing tools that are able to perform automatic verification on realistic models of software systems is one of the main challenges facing the formal methods community. We briefly review the research area and introduce three papers selected from the Seventeenth International Conference on Tools and Algorithms for the Construction and Analysis of Systems (tacas 2011).

15.
In this paper, we present ParaMiner, a generic and parallel algorithm for closed pattern mining. ParaMiner is built on the principles of pattern enumeration in strongly accessible set systems. Its efficiency is due to a novel dataset reduction technique (which we call EL-reduction), combined with a novel technique for performing dataset reduction during parallel execution on a multi-core architecture. We illustrate ParaMiner’s genericity by using the algorithm to solve three different pattern mining problems: frequent itemset mining, mining frequent connected relational graphs, and mining gradual itemsets. We prove the soundness and completeness of ParaMiner. Furthermore, our experiments show that, despite being a generic algorithm, ParaMiner can compete with specialized state-of-the-art algorithms designed for the pattern mining problems mentioned above. Moreover, for the particular problem of gradual itemset mining, ParaMiner outperforms the state-of-the-art algorithm by two orders of magnitude.
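For readers unfamiliar with the target problem, the following is a deliberately naive sketch of closed frequent itemset mining (the first of the three problems above), not ParaMiner's enumeration strategy or its EL-reduction: a frequent itemset is closed when no proper superset has the same support.

```python
from itertools import combinations

# Toy transaction database; data and threshold are illustrative.
transactions = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}]
min_support = 2
items = sorted(set().union(*transactions))

def support(itemset):
    return sum(1 for t in transactions if itemset <= t)

# Enumerate all frequent itemsets by brute force (exponential; for illustration only).
frequent = [frozenset(c)
            for k in range(1, len(items) + 1)
            for c in combinations(items, k)
            if support(frozenset(c)) >= min_support]

# Keep only the closed ones: no proper superset with identical support.
closed = [p for p in frequent
          if not any(p < q and support(q) == support(p) for q in frequent)]

for p in sorted(closed, key=sorted):
    print(sorted(p), support(p))
```

Generic miners like the one described above replace this brute-force enumeration with a principled traversal of the set system plus dataset reduction, which is where the reported speedups come from.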

16.
17.
In this paper, we study the relation among Answer Set Programming (ASP) systems from a computational point of view. We consider the smodels, dlv, and cmodels ASP systems, all based on the stable model semantics; the first two are native ASP systems and the last is a SAT-based system. We first show that smodels, dlv, and cmodels explore search trees with the same branching nodes (assuming, of course, the same branching heuristic) on the class of tight logic programs. Leveraging the fact that SAT-based systems rely on the well-studied Davis–Logemann–Loveland (dll) algorithm, we derive new complexity results for the ASP procedures. We also show that on nontight programs the SAT-based systems are computationally different from the native procedures, and that the latter have computational advantages. Moreover, we show that native procedures can guarantee the “correctness” of a reported solution when reaching the leaves of the search trees (i.e., no stability check is needed), while this is not the case for SAT-based procedures on nontight programs. A similar advantage holds for dlv in comparison with smodels if the “well-founded” operator is disabled and only Fitting’s operator is used for negative inferences. Finally, we study the “cost” of achieving such advantages and comment on the extent to which the presented results carry over to other systems.
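For context on the tight/nontight distinction above (standard background, not a result of the paper): on tight programs the stable models coincide with the classical models of Clark's completion, which is what allows a SAT-based system to reduce answer set computation to propositional satisfiability. A small illustrative program and its completion:

\[
\text{program: } \; p \leftarrow q,\ \mathrm{not}\ r. \qquad q.
\qquad\leadsto\qquad
\text{completion: } \; p \leftrightarrow (q \wedge \neg r), \quad q \leftrightarrow \top, \quad r \leftrightarrow \bot
\]

Its unique classical model, {p, q}, is also its unique stable model. On nontight programs (those with positive recursion, e.g. a fact-free rule p ← p), the completion admits models such as {p} that are not stable, which is why SAT-based procedures need extra stability checks there.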

18.
This paper presents a distributed (Bulk-Synchronous Parallel, or bsp) algorithm to compute on-the-fly whether a structured model of a security protocol satisfies a ctl* formula. Exploiting the structured nature of security protocols allows us to design a simple method for distributing the state space under consideration in a need-driven fashion. Based on this distribution of the states, the algorithm for checking an ltl formula can be simplified and optimised, allowing, with a few careful modifications, the design of an efficient algorithm for ctl* checking. Prototype implementations have been developed, allowing us to run benchmarks that investigate the parallel behaviour of our algorithms.

19.
A DIN Kernel LISP Draft (DKLisp) has been developed by DIN as a reaction to Action D1 (N79), a short-term goal of ISO WG16. It defines a subset language, as compatible as possible with the ANSI Common-Lisp draft, but also with the EuLisp draft. It combines the most important LISP mainstream features in a single, compact, but nevertheless complete language definition, which could therefore be well suited as the basis for a short-term international Lisp standard. Besides the functional and knowledge-processing features, the expressive power of the language compares well with contemporary procedural languages such as C++ (of course without libraries). Important features of DKLisp are:
  • to be a “Lisp-1,” but allowing an easy “Lisp-2” transformation;
  • to be a simple, powerful and standardized educational Lisp;
  • to omit all features that are unclean or under heavy discussion;
  • DKLisp programs run nearly unchanged in Common-Lisp;
  • DKLisp contains a simple object and package system;
  • DKLisp contains those data classes and control structures also common to most modern Lisp and non-Lisp languages;
  • DKLisp offers a simple stream I/O;
  • DKLisp contains a clean unified hierarchical class/type system;
  • DKLisp contains the typical “Lisp-features” in an orthogonal way;
  • DKLisp allows and encourages really small but powerful implementations;
  • DKLisp comes in levels, allowing ANSI Common-Lisp to be an extension of DKLisp level-1.
  • This is the second version of the proposal (version 1.2), with slight differences from the one sent to ISO; the changes stem from remarks generously sent by many readers of the previous attempt.

20.
Kermeta is a meta-language for specifying the structure and behavior of graphs of interconnected objects called models. In this paper, we show that Kermeta is well suited for solving three graph-based problems. First, Kermeta allows the specification of generic model transformations such as refactorings, which we apply to different metamodels including Ecore, Java, and Uml. Second, we demonstrate the extensibility of Kermeta to the formal language Alloy using an inter-language model transformation; Kermeta uses Alloy to generate recommendations for completing partially specified models. Third, we show that the Kermeta compiler achieves better execution time and memory performance compared to similar graph-based approaches on a common case study. The three solutions proposed for these graph-based problems, and their evaluation with Kermeta according to the criteria of genericity, extensibility, and performance, are the main contribution of the paper. A further contribution is the comparison of these solutions with those proposed by other graph-based tools.
