Similar Documents
20 similar documents found.
1.
Model checking is a particular approach to property verification of systems. One describes a system in a mathematical model, expresses the properties one wishes to verify for the system in a formal language, and then checks whether the model satisfies the formal property. Invented 25 years ago, this approach is fully automatic and has therefore gained wide acceptance and is increasingly used in commercial research and development units. Impediments remain on the road to successful technology transfer. For one, the size of models often increases exponentially in the number of variables or sub-models, preventing scalable automation. Abstracting a model to reduce its size can be a cost-effective way of addressing this. For another, systems and models may be subject to change, e.g. in an incremental design process. One then seeks cost-effective means of ascertaining that property verifications remain valid as models evolve. This special section presents current research on such abstraction and change management in model checking.
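To make the basic check concrete, here is a minimal sketch of explicit-state model checking of an invariant property, written in Python. The model, property, and function names are illustrative assumptions, not material from the surveyed work.

from collections import deque

def check_invariant(initial_states, successors, invariant):
    """Explicit-state reachability check: does every reachable state satisfy
    the invariant?  Returns (True, None) or (False, counterexample path)."""
    visited = set()
    queue = deque((s, (s,)) for s in initial_states)
    while queue:
        state, path = queue.popleft()
        if state in visited:
            continue
        visited.add(state)
        if not invariant(state):
            return False, path                      # property violated: report the trace
        for nxt in successors(state):
            if nxt not in visited:
                queue.append((nxt, path + (nxt,)))
    return True, None                               # invariant holds in every reachable state

# Toy model: a counter modulo 4 that is required never to reach 3.
ok, trace = check_invariant(initial_states=[0],
                            successors=lambda s: [(s + 1) % 4],
                            invariant=lambda s: s != 3)
print(ok, trace)   # False (0, 1, 2, 3)

The visited set is exactly what blows up with the number of variables, which is why the abstraction and change-management techniques discussed in this section matter.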

2.
Model checkers were originally developed to support the formal verification of high-level design models of distributed system designs. Over the years, they have become unmatched in precision and performance in this domain. Research in model checking has meanwhile moved towards methods that allow us to reason also about implementation-level artifacts (e.g., software code) directly, instead of hand-crafted representations of those artifacts. This does not mean that there is no longer a place for the use of high-level models, but it does mean that such models are used in a different way today. In the approach that we describe here, high-level models are used to represent the environment for which the code is to be verified, but not the application itself. The code of the application is now executed as is by the model checker, while using powerful forms of abstraction on-the-fly to build the abstract state space that guides the verification process. This model-driven code checking method allows us to verify implementation-level code efficiently for high-level safety and liveness properties. In this paper, we give an overview of the methodology that supports this new paradigm of code verification.
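A rough sketch of the on-the-fly abstraction idea follows; the function names (step_environment, run_application, abstract) are hypothetical and do not correspond to the tool described in the paper. Concrete states produced by executing the real code are mapped through an abstraction function before being stored, so the search is pruned by the abstract state space while the application code runs unmodified.

def search_with_abstraction(init, step_environment, run_application, abstract):
    """Search over abstract states while executing concrete application code.
    step_environment yields inputs from the high-level environment model,
    run_application executes the real code on a concrete state, and
    abstract projects a concrete state onto the abstract state space."""
    seen = set()                          # abstract states already explored
    stack = [init]
    while stack:
        concrete = stack.pop()
        a = abstract(concrete)
        if a in seen:
            continue                      # abstractly equivalent state: prune the search
        seen.add(a)
        for env_input in step_environment(concrete):
            stack.append(run_application(concrete, env_input))
    return seen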

3.
The formal specification component of verification can be exported to simulation through the idea of property checkers. The essence of this approach is the automatic construction of an observer from the specification in the form of a program that can be interfaced with a simulator and alert the user if the property is violated by a simulation trace. Although not complete, this lighter approach to formal verification has been effectively used in software and digital hardware to detect errors. Recently, the idea of property checkers has been extended to analog and mixed-signal systems.
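For illustration only (the property, trace encoding, and observer below are assumptions rather than material from the paper), an observer for a bounded-response property such as "every request is acknowledged within N steps" can be driven by a simulator one step at a time and raise an alarm when the trace violates it.

def make_observer(bound):
    """Observer for: every 'req' event is followed by an 'ack' within `bound` steps."""
    pending = []                              # steps at which still-open requests were seen

    def observe(step, event):
        if event == "req":
            pending.append(step)
        elif event == "ack" and pending:
            pending.pop(0)
        # violation: the oldest open request has waited longer than the bound
        if pending and step - pending[0] > bound:
            raise AssertionError(f"request at step {pending[0]} not acknowledged by step {step}")

    return observe

observe = make_observer(bound=3)
trace = ["req", "idle", "ack", "req", "idle", "idle", "idle", "idle"]
for step, event in enumerate(trace):
    observe(step, event)                      # raises on the unacknowledged second request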

4.
When building dependable systems by integrating untrusted software components that were not originally designed to interact with each other, architectural mismatches related to assumptions about their failure behaviour are likely to occur. These mismatches, if not prevented during system design, have to be tolerated at runtime. This paper presents an architectural abstraction based on exception handling for structuring fault-tolerant software systems. This abstraction comprises several components and connectors that promote an existing untrusted software element into an idealised fault-tolerant architectural element. Moreover, the abstraction is considered in the context of a rigorous software development approach based on formal methods for representing the structure and behaviour of the software architecture. The proposed approach relies on formal specification and verification to analyse exception propagation and to verify important dependability properties, such as deadlock freedom, as well as scenarios of architectural reconfiguration. The formal models are automatically generated using model transformation from UML diagrams: a component diagram representing the system structure and sequence diagrams representing the system behaviour. Finally, the formal models are also used for generating unit and integration test cases, which are used for assessing the correctness of the source code. The feasibility of the proposed architectural approach was evaluated on an embedded critical case study. Patrick Brito is supported by Fapesp/Brazil under Grant No. 06/02116–2 and CAPES/Brazil under Grant No. 0722–07–3. Cecília Rubira is partially supported by CNPq/Brazil under Grant Nos. 301446/2006–7 and 484138/2006–5.

5.
Writing requirements in a formal notation permits automatic assessment of such properties as ambiguity, consistency, and completeness. However, verifying that the properties expressed in requirements are preserved in other software life cycle artifacts remains difficult. The existing techniques either require substantial manual effort and skill or suffer from exponential explosion of the number of states in the generated state spaces. “Light-weight” formal methods are an approach to achieving scalability in fully automatic verification by checking an abstraction of the system for only certain properties. We describe light-weight techniques for automatic analysis of consistency between software requirements (expressed in SCR) and detailed designs in low-degree-polynomial time, achieved at the expense of using imprecise data-flow analysis techniques. The SCR specification language describes systems as state machines with event-driven transitions. We define detailed designs to be consistent with their SCR requirements if they contain exactly the same transitions. We have developed a language for specifying detailed designs, an analysis technique to create a model of a design through data-flow analysis of the language constructs, and a method to automatically generate and check properties derived from requirements to ensure a design's consistency with them. These ideas are implemented in a tool named CORD, which we used to uncover errors in designs of some existing systems.
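As a minimal sketch of the stated consistency notion (a design is consistent with its SCR requirements if it contains exactly the same transitions), and not a depiction of the actual CORD analysis, one could compare the transition sets extracted from the two artifacts.

def consistent(req_transitions, design_transitions):
    """Both arguments are sets of (state, event, next_state) triples, extracted
    from the SCR requirements and from the detailed design respectively."""
    missing = req_transitions - design_transitions    # required but absent from the design
    extra = design_transitions - req_transitions      # in the design but not required
    return (not missing and not extra), missing, extra

req = {("Off", "power_on", "Standby"), ("Standby", "start", "Running")}
design = {("Off", "power_on", "Standby"), ("Standby", "start", "Running"),
          ("Running", "overheat", "Off")}
ok, missing, extra = consistent(req, design)
print(ok, missing, extra)   # False set() {('Running', 'overheat', 'Off')}

The contribution of the work lies in obtaining those transition sets automatically and cheaply, via data-flow analysis of the design language, rather than in the comparison itself.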

6.
Approaches to access control (AC) policy languages, such as the eXtensible Access Control Markup Language (XACML), do not provide a formal representation for specifying rule- and policy-combining algorithms or for verifying properties of AC policies. Some authors propose formal representations for these combining algorithms. However, the proposed models are not expressive enough to formally represent history-based classes of these algorithms, such as ordered-permit-overrides. In addition, some other authors propose a formal representation but do not present automated support for formal verification of properties of AC policies that use these algorithms. This paper demonstrates a new representation that can express all existing AC rule and policy combinations of which the authors are aware. This representation can also be used to automate the formal verification of properties of AC policies related to these algorithms. The new modeling representation, based on state machines, is used to specify rule- and policy-combining algorithms. Examples of these algorithms are programmed in the language of the SPIN model checker, and the programs are then used to support the automated formal verification of properties of AC policies. We present our approach and then use the AC policies and properties of CONTINUE, a conference management system, to compare it with prior work. Our first contribution is a new modeling representation for combining algorithms based on state machines. The second contribution is the formal verification of AC properties under certain combining algorithms that are beyond the capability of other approaches.
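As a hedged illustration of the state-machine view of combining algorithms, the standard XACML permit-overrides algorithm (ignoring indeterminate results for brevity) can be written as a three-state machine driven by the sequence of rule decisions; this is a generic example, not the paper's representation or its SPIN encoding.

def permit_overrides(decisions):
    """Combine a sequence of rule decisions ('Permit', 'Deny', 'NotApplicable'):
    any Permit wins, otherwise any Deny wins, otherwise NotApplicable."""
    state = "NotApplicable"
    for d in decisions:
        if d == "Permit":
            state = "Permit"              # absorbing state: a permit overrides everything
            break
        if d == "Deny" and state == "NotApplicable":
            state = "Deny"
    return state

print(permit_overrides(["NotApplicable", "Deny", "Permit"]))   # Permit
print(permit_overrides(["NotApplicable", "Deny"]))             # Deny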

7.
Formal verification of complex algorithms is challenging. Verifying their implementations goes beyond the state of the art of current automatic verification tools and usually involves intricate mathematical theorems. Certifying algorithms compute, in addition to each output, a witness certifying that the output is correct. A checker for such a witness is usually much simpler than the original algorithm, yet it is all the user has to trust. The verification of checkers is feasible with current tools and leads to computations that can be completely trusted. We describe a framework to seamlessly verify certifying computations. We use the automatic verifier VCC for establishing the correctness of the checker and the interactive theorem prover Isabelle/HOL for high-level mathematical properties of algorithms. We demonstrate the effectiveness of our approach by presenting the verification of typical examples from the widely used, industrial-strength algorithmic library LEDA.
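A toy instance of the pattern (not one of the LEDA checkers treated in the paper): a certifying bipartiteness test outputs a 2-coloring as its witness, and the checker below only has to validate that witness rather than re-derive it.

def check_two_coloring(edges, coloring):
    """Accept the witness iff every vertex gets color 0 or 1 and no edge joins two
    vertices of the same color, which certifies that the graph is bipartite."""
    return (all(c in (0, 1) for c in coloring.values())
            and all(coloring[u] != coloring[v] for u, v in edges))

edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
witness = {"a": 0, "b": 1, "c": 0, "d": 1}       # produced by the certifying algorithm
print(check_two_coloring(edges, witness))        # True: the answer 'bipartite' can be trusted

The checker is a few lines long and is the only component that has to be verified; trusting it suffices to trust every run of the far more complex certifying algorithm.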

8.

Model-driven engineering (MDE) promotes the use of models throughout the software development cycle in order to increase abstraction and reduce software complexity. It favors the definition of domain-specific modeling languages (DSMLs) thanks to frameworks dedicated to meta-modeling and code generation, such as EMF (Eclipse Modeling Framework). The standard semantics of meta-models allows interoperability between tools such as language analysers (e.g., XText), code generators (e.g., Acceleo), and also model transformation tools (e.g., ATL). However, a major limitation of MDE is the lack of formal reasoning tools for ensuring the correctness of models. Indeed, most of the verification activities offered by MDE tools are based on the verification of OCL constraints on instances of meta-models. However, these constraints mainly deal with structural properties of the model and often leave out its behavioral semantics. In this work, we propose to bridge the gap between MDE and the rigorous world of formal methods in order to guarantee the correctness of both structural and behavioral properties of the model. Our approach translates EMF meta-models into an equivalent formal B specification and then injects models into this specification. The equivalence between the resulting B specification and the original EMF model is preserved by proved design steps, leading to a rigorous MDE technique. The AtelierB prover is used to guarantee the correctness of the model's behavior with respect to its invariant properties, and the ProB model-checker is used to animate underlying execution scenarios, which are translated back to the initial EMF model. Besides the use of these automatic reasoning tools in MDE, proved B refinements are also investigated in this paper in order to gradually translate abstract EMF models into concrete models which can then be automatically compiled into a programming language.


9.
10.
11.
We review a number of formal verification techniques supported by STeP, the Stanford Temporal Prover, describing how the tool can be used to verify properties of several versions of the Bakery algorithm for mutual exclusion. We verify the classic two-process algorithm and simple variants, as well as an atomic parameterized version. The methods used include deductive verification rules, verification diagrams, automatic invariant generation, and finite-state model checking and abstraction.
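For reference, a sketch of Lamport's Bakery protocol for two processes, written as plain Python with busy-waiting (thread scheduling and memory-model subtleties are glossed over; this is background on the algorithm being verified, not STeP input).

import threading

choosing = [False, False]            # process i is in the middle of picking a ticket
number = [0, 0]                      # ticket of each process; 0 means "not competing"
counter = [0]                        # shared data touched only inside the critical section

def process(i):
    other = 1 - i
    for _ in range(1000):
        choosing[i] = True
        number[i] = max(number) + 1              # take a ticket larger than any seen
        choosing[i] = False
        while choosing[other]:                   # wait until the other finished choosing
            pass
        # wait while the other holds an "earlier" ticket (ties broken by process id)
        while number[other] != 0 and (number[other], other) < (number[i], i):
            pass
        counter[0] += 1                          # critical section
        number[i] = 0                            # exit protocol

threads = [threading.Thread(target=process, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter[0])                                # 2000 if mutual exclusion held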

12.
Fuzzy temporal logics have recently been defined so that temporal properties of dynamic systems can be specified during model checking, yet only a few fuzzy temporal logics with suitably expressive models are available for use in the design phase, and even when a suitable model exists it often lacks a capable model checking approach. To deal with uncertainty in the model checking paradigm, this paper introduces a fuzzy Kripke model (FzKripke) and then provides a verification approach using a novel logic called Fuzzy Computation Tree Logic* (FzCTL*). State-space explosion is handled not only with well-known concepts such as abstraction and bisimulation, but also with a novel approximation method devised to deal with this problem. A fuzzy program graph, a generalization of both program graphs and FzKripke structures, is also introduced to allow a higher level of abstraction in model construction. Finally, the modeling and verification of a multi-valued flip-flop is studied in order to demonstrate the capabilities of the proposed models.
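As a rough indication of the kind of structure involved (a generic formulation; the paper's FzKripke definition may differ in detail), a fuzzy Kripke structure replaces the Boolean transition relation and labeling of an ordinary Kripke structure with [0,1]-valued maps:

\[
  \mathcal{M} = (S,\, S_0,\, R,\, L), \qquad R : S \times S \to [0,1], \qquad L : S \times AP \to [0,1],
\]

where R(s, s') gives the degree to which s' is a successor of s and L(s, p) gives the degree to which proposition p holds in s; the classical Kripke structure is recovered when both maps take values only in {0, 1}.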

13.
Dynamic verification is a new approach to formal verification, applicable to generic algorithms such as those found in the Standard Template Library (STL, part of the Draft ANSI/ISO C++ Standard Library). Using behavioral abstraction and symbolic execution techniques, verifications are carried out at an abstract level such that the results can be used in a variety of instances of the generic algorithms without repeating the proofs. This is achieved by substituting for the type parameters of generic algorithms special data types that model generic concepts by accepting symbolic inputs and deducing outputs using inference methods. By itself, this symbolic execution technique supports testing of programs with symbolic values at an abstract level. For formal verification one also needs to generate multiple program execution paths and use assertions (to handle while loops, for example), but the authors show how this can be achieved via directives to a conventional debugger program and an analysis database. The assertions must still be supplied, but they can be packaged separately and evaluated as needed by appropriate transfers of control orchestrated via the debugger. Unlike all previous verification methods, the dynamic verification method thus works without having to transform source code or process it with special interpreters. The paper includes an example of the formal verification of an STL generic algorithm.

14.
15.
Order-reduction is a standard automated approximation technique for computer-aided design, analysis, and simulation of many classes of systems, from circuits to buildings. For order-reduction to be used as a sound abstraction for formal verification, a measure of the similarity of behavior must be formalized and computed; developing such a measure computationally for a class of asymptotically stable linear systems is the main contribution of this paper. We have implemented the order-reduction as a sound abstraction process through a source-to-source model transformation in the HyST tool and use SpaceEx to compute sets of reachable states, verifying properties of the full-order system through analysis of the reduced-order system. Our experimental results suggest that systems with thousands of state variables can be reduced to systems with tens of state variables such that the order-reduction overapproximation error is small enough to prove or disprove safety properties of interest using current reachability analysis tools. Our results illustrate that this approach is effective in tackling the state-space explosion problem for verification of high-dimensional linear systems.
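For orientation only, the shape of standard projection-based model order reduction for a stable LTI system is sketched below; the particular reduction and similarity measure used in the paper may differ:

\[
  \dot{x} = Ax + Bu,\quad y = Cx,
  \qquad x \approx V x_r,
  \qquad \dot{x}_r = W^{\top} A V\, x_r + W^{\top} B\, u,\quad y_r = C V x_r .
\]

Soundness for verification then rests on a computable bound on the output error \(\lVert y - y_r \rVert\) (for balanced truncation, for instance, twice the sum of the neglected Hankel singular values), which can be added onto the reachable sets of the reduced-order system so that they over-approximate the behavior of the full-order system.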

16.
Interaction among autonomous agents in Multi-Agent Systems (MASs) is a key aspect of how agents coordinate with one another. Social approaches, as opposed to mental approaches, have recently received considerable attention in the area of agent communication. They exploit observable social commitments to develop a verifiable formal semantics through which communication protocols can be specified. Developing and implementing algorithmic model checking for social commitments has recently been addressed. However, model checking social commitments in the presence of uncertainty has yet to be investigated. In this paper, we propose a model checking technique for verifying social commitments in uncertain settings. Social commitments are specified in a modal logical language called Probabilistic Computation Tree Logic of Commitments (PCTLC). The modal logic PCTLC extends PCTL, the probabilistic extension of CTL, with modalities for commitments and their fulfillments. The proposed verification method is a reduction-based technique that reduces the problem to PCTL model checking. The technique is based upon a set of reduction rules that translate PCTLC formulae into PCTL formulae in order to take advantage of existing model checkers such as PRISM. Proofs that confirm the soundness of the reduction technique are presented. We also present rules that transform our new version of interpreted systems into Markov Decision Process (MDP) models suitable for the PRISM tool. We implemented our approach on top of the PRISM model checker and verified given properties of the Oblivious Transfer Protocol from the cryptography domain. Our simulation demonstrates the effectiveness of our approach in verifying and model checking social commitments in the presence of uncertainty. We believe that the proposed formal verification technique will advance the literature on social commitments by making it possible not only to represent social commitments in uncertain settings but also to verify them in such settings.

17.
Inevitability properties in branching temporal logics are of the form ∀◇φ, where φ is an arbitrary (timed) CTL (Computation Tree Logic) formula. Such inevitability properties in dense-time logics can be analyzed with a greatest fixpoint calculation. We present algorithms to model-check inevitability properties. We discuss a technique for early decision on the greatest fixpoint calculation which has shown promising performance against several benchmarks. We have experimented with various issues which may affect the performance of TCTL inevitability analysis. Specifically, our algorithms come with a parameter for the measurement of time progress. We report the performance of our implementation with regard to various parameter values and with or without the non-Zeno computation requirement in the evaluation of greatest fixpoints. We have also experimented with safe abstraction techniques for model-checking TCTL inevitability properties. The experimental results help us deduce rules for setting the parameter to obtain good verification performance. Finally, we summarize suggestions for configuring an efficient TCTL inevitability evaluation procedure.
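As background for the greatest-fixpoint calculation mentioned above (stated here for untimed CTL; the dense-time TCTL computation additionally has to account for time progress and non-Zenoness), inevitability is the dual of a "possibly forever" property, which has the standard greatest-fixpoint characterization:

\[
  \forall\Diamond\varphi \;\equiv\; \neg\,\exists\Box\,\neg\varphi,
  \qquad
  \exists\Box\,\psi \;=\; \nu Z.\,\bigl(\psi \wedge \exists\bigcirc Z\bigr),
\]

so checking ∀◇φ amounts to computing the greatest fixpoint for ∃□¬φ and testing whether it contains an initial state.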

18.
An approach is presented to the tripartite problem of modeling physical solids mathematically, representing the models in a computer, and using representations in geometric algorithms. Examples are primarily from the domain of manufacturing and design of discrete goods, but the results reported here have wider significance. Mathematical definitions can formalize many of our intuitions about three-dimensional (3-D) objects and operations on them. Representation-free (mathematical) models and functions allow formal properties to be defined for characterizing geometric representations. Three common representation schemes for 3-D objects are described briefly, along with some of their formal and informal properties. A rigorous, as opposed to ad hoc, approach to modeling has several advantages. Broadly, the conceptual complications and ambiguities which are endemic to ad hoc problem statements and solutions may be avoided by appealing to a precise mathematical semantics. Mathematical rigor is mandatory in applications such as automatic manufacturing in which correctness must be guaranteed and consistency and validity maintained.

19.
20.
Compilation or translation is an issue not only at the level of programs but also at the level of specifications. This talk considers the design of real-time software for Programmable Logic Controllers (PLCs) and discusses questions of correctness and optimization for translations between different semantic models. The need for such translations arises in order to bridge the gap between conceptually clear models on the one hand and computationally feasible models on the other. To exemplify this, the talk considers so-called Constraint Diagrams as formalizations of behavioural requirements and so-called PLC-Automata as design specifications. Both Constraint Diagrams and PLC-Automata are given a formal semantics in the Duration Calculus, a logic for real-time systems. To enable automatic verification (model checking) of designs against requirements, translations of Constraint Diagrams and PLC-Automata into Timed Automata are presented. Semantic correctness and optimization of these translations are discussed. The concepts presented in the talk are implemented in a tool called Moby-RT that, besides the above verification facilities, also offers a compiler for PLC code generation.
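To give a flavor of the kind of real-time requirement written in the Duration Calculus (the standard gas-burner example from the literature, used here purely as an illustration and not taken from this talk), one constrains the accumulated duration of a state over every observation interval:

\[
  \ell \ge 60 \;\Rightarrow\; 20 \cdot \int \mathit{Leak} \;\le\; \ell ,
\]

read as: in every observation interval of at least 60 time units, the system may be in the Leak state for at most one twentieth of the interval. Constraint Diagrams and PLC-Automata give such interval-based requirements a graphical and an operational counterpart, respectively.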
