Similar Documents
 20 similar documents found (search time: 15 ms)
1.
2.
In recent years, much effort has been devoted to the design and implementation of microprogramming languages that support the production of highly reliable yet efficient run-time microcode. One of the goals for such languages is to facilitate the formal verification of microprograms using Hoare's inductive assertion method. Essential to the use of this method is an axiomatic definition of the microprogramming language. In this paper, we describe the axiomatization of a machine-dependent microprogramming language called S*(QM-1) (1). This language is an instantiation of the machine-independent language schema S* (2,3) based on the Nanodata QM-1 nanolevel architecture, and is designed for the development and specification of horizontal microprograms. We discuss the rationale underlying the design and axiomatization of this language and we show, using S*(QM-1) as a case study, some of the important points in which the verification of firmware differs from software verification.
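The inductive assertion method mentioned above attaches an invariant to each loop cut-point and checks that it holds on entry, is preserved by every iteration, and implies the postcondition at exit. The following minimal Python sketch illustrates the idea on a toy multiply-by-repeated-addition loop; it is only an illustration of the general method, not the paper's S*(QM-1) notation, and all names in it are hypothetical.

    # Hypothetical sketch (not S*(QM-1) notation): Hoare's inductive assertion
    # method checks that an invariant attached to a loop cut-point holds on
    # entry and is preserved by every iteration. The "microprogram" here is a
    # toy multiply-by-repeated-addition over small registers, and the check is
    # done by exhaustive enumeration of the (tiny) input space.

    def invariant(a, b, acc, count):
        # Cut-point assertion: acc holds the partial product a * (b - count).
        return acc == a * (b - count)

    def check_multiply_microprogram(max_val=8):
        for a in range(max_val):
            for b in range(max_val):
                acc, count = 0, b            # initialisation ("precondition" state)
                assert invariant(a, b, acc, count), "invariant fails on entry"
                while count > 0:             # loop body: one "micro-step" per iteration
                    acc += a
                    count -= 1
                    assert invariant(a, b, acc, count), "invariant not preserved"
                assert acc == a * b, "postcondition fails at exit"
        print("inductive assertion verified for all inputs up to", max_val - 1)

    if __name__ == "__main__":
        check_multiply_microprogram()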

3.
This paper presents an overview of the main results of the project Verification of ERLANG Programs, which is funded by the Swedish Business Development Agency (NUTEK) and by Ericsson within the ASTEC (Advanced Software TEChnology) initiative. Its main outcome is the ERLANG Verification Tool (EVT), a theorem prover which assists in obtaining proofs that ERLANG applications satisfy their correctness requirements, formulated as behavioural properties in a modal logic with recursion. We give a summary of the verification framework as supported by EVT and discuss reasoning principles essential for successful proofs, such as inductive and compositional reasoning and an efficient treatment of side-effect-free code. The experiences of applying the tool in an industrial case study are summarised, and an approach for supporting verification in the presence of program libraries is outlined. EVT is essentially a classical proof assistant, or theorem-proving tool, requiring users to intervene in the proof process at crucial steps such as stating program invariants. However, the tool offers considerable support for automatic proof discovery through higher-level tactics tailored to the particular task of the verification of ERLANG programs. In addition, a graphical interface permits easy navigation through proof tableaux, proof reuse, and meaningful feedback about the current proof state, to assist users in making informed proof decisions.
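As a rough illustration of what a behavioural property in a modal logic with recursion means, the sketch below evaluates the least-fixed-point property "some path can eventually perform the action done" on a tiny hand-written labelled transition system by straightforward fixed-point iteration. This is only a toy under invented assumptions, unrelated to EVT's actual logic, syntax, or proof tactics.

    # Hypothetical sketch, unrelated to EVT's own syntax: evaluating a recursive
    # (least fixed-point) behavioural property on a finite labelled transition
    # system by fixed-point iteration. The property is
    #   mu X. <done>true \/ <->X
    # i.e. "some path can eventually perform the action 'done'".

    LTS = {  # state -> list of (action, successor)
        "s0": [("req", "s1")],
        "s1": [("work", "s1"), ("done", "s2")],
        "s2": [],
    }

    def eventually_done(lts):
        sat = set()                                   # least fixed point: start empty
        while True:
            new = {
                s for s, succs in lts.items()
                if any(a == "done" for a, _ in succs)        # <done>true
                or any(t in sat for _, t in succs)           # <->X
            }
            if new == sat:
                return sat                            # fixed point reached
            sat = new

    print(eventually_done(LTS))   # expect s0 and s1 to satisfy the property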

4.
Formal hardware verification methods: A survey
Growing advances in VLSI technology have led to an increased level of complexity in current hardware systems. Late detection of design errors typically results in higher costs due to the associated time delay as well as loss of production. Thus it is important that hardware designs be free of errors. Formal verification has become an increasingly important technique for establishing the correctness of hardware designs. In this article we survey the research that has been done in this area, with an emphasis on more recent trends. We present a classification framework for the various methods, based on the forms of the specification, the implementation, and the proof method. This framework enables us to better highlight the relationships and interactions between seemingly different approaches.

5.
In this paper, we describe a tool to verify Erlang programs and show, by means of an industrial case study, how this tool is used. The tool includes a number of components, including a translation component, a state space generation component and a model checking component. To verify properties of the code, the tool first translates the Erlang code into a process algebraic specification. The outcome of the translation is made more efficient by taking advantage of the fact that software written in Erlang builds upon software design patterns such as client–server behaviours. A labelled transition system is constructed from the specification by use of the µCRL toolset. The resulting labelled transition system is model checked against a set of properties formulated in the µ-calculus using the Caesar/Aldébaran toolset. As a case study we focus on a simplified resource manager modelled on a real implementation in the control software of the AXD 301 ATM switch. Some of the key properties we verified for the program are mutual exclusion and non-starvation. Since the toolset supports only the regular alternation-free µ-calculus, some ingenuity is needed for checking the liveness property non-starvation. The case study has been refined step by step to provide more functionality, with each step motivated by a corresponding formal verification using model checking.
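Mutual exclusion, one of the key properties mentioned above, can in principle be checked by exhaustively exploring a labelled transition system. The sketch below does this for a hypothetical two-client lock written directly in Python; it stands in for, and is not, the Erlang-to-µCRL translation and CADP model checking pipeline used in the paper.

    # Hypothetical sketch (not the Erlang/µCRL/CADP tool chain): mutual exclusion
    # checked by exhaustive exploration of a small state space. Two clients share
    # one lock; a state records each client's phase and who currently holds the lock.

    def successors(state):
        phases, holder = state
        for i in range(2):
            p = phases[i]
            if p == "idle":                             # client requests the lock
                yield (phases[:i] + ("waiting",) + phases[i+1:], holder)
            elif p == "waiting" and holder is None:     # lock grant (server step)
                yield (phases[:i] + ("critical",) + phases[i+1:], i)
            elif p == "critical":                       # client releases the lock
                yield (phases[:i] + ("idle",) + phases[i+1:], None)

    def check_mutex():
        init = (("idle", "idle"), None)
        seen, stack = {init}, [init]
        while stack:
            state = stack.pop()
            phases, _ = state
            assert phases.count("critical") <= 1, "mutual exclusion violated"
            for nxt in successors(state):
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        print("mutual exclusion holds over", len(seen), "reachable states")

    check_mutex()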

6.
The modelling of delay-insensitive asynchronous circuits in the process calculus CCS is addressed. MUST-testing (rather than bisimulation) is found to support verification both of the property of delay-insensitivity and of design by stepwise refinement. Automated verification is possible with a well-known tool, the Edinburgh Concurrency Workbench.

7.
A backward reasoning approach is described to show that the behavioral specification of a finite-state machine satisfies some high-level input-output specification. It is based on the principles of theorem proving. The proof may require inductive reasoning. The theorem prover seeks to identify the properties on which induction needs to be carried out and then proves such properties. Goal-reduction-based backward reasoning is employed. A brief account of the proof procedure is presented. This is followed by an example illustrating the approach. Finally, the approach is presented in an algorithmic form.
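One simple form of backward reasoning over a finite-state machine is a pre-image (predecessor) computation from a goal set, iterated to a fixed point. The sketch below shows this on a hypothetical four-state machine; the machine, its state names, and the goal are invented for illustration, and the paper's goal-reduction proof procedure is considerably richer.

    # Hypothetical sketch of backward reasoning over a finite-state machine:
    # start from the set of states satisfying the goal and repeatedly add
    # predecessors (a pre-image computation) until a fixed point; the goal is
    # reachable from a state iff that state ends up in the accumulated set.

    FSM = {  # state -> set of successor states
        "reset":  {"load"},
        "load":   {"shift"},
        "shift":  {"shift", "output"},
        "output": {"reset"},
    }

    def backward_reachable(goal_states, fsm):
        reached = set(goal_states)
        changed = True
        while changed:
            changed = False
            for s, succs in fsm.items():
                if s not in reached and succs & reached:   # s has a successor in the set
                    reached.add(s)
                    changed = True
        return reached

    print("output" in backward_reachable({"output"}, FSM))   # trivially True
    print("reset" in backward_reachable({"output"}, FSM))    # True: reset can reach output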

8.
9.
An overview of the verification of SET
This paper describes the verification of Secure Electronic Transaction (SET), an e-commerce protocol by VISA and MasterCard. The main tasks are to comprehend the written documentation, to produce an accurate formal model, to identify specific protocol goals, and, finally, to prove them. The main obstacles are the protocol's complexity (due in part to its use of digital envelopes) and its unusual goals involving partial information sharing. Our verification efforts show that the protocol does not completely satisfy its goals, although the flaws are minor. The primary outcome of the project is experience with the verification of enormous and complicated protocols. This paper summarizes the project – the details appear elsewhere [11, 12, 13] – focusing on the issues and the conclusions.

10.
An ‘open’ certification process is characterised here that is not based on any central agency, but rather on the option for any party to confirm any part of the certification process at will. The model for this paradigm has been a distributed, piece-wise, semantic audit carried out on the Linux kernel source code using a lightweight formal method. Our goal is a technology that allows open source developers to receive formally backed certifications for their project, in quid pro quo exchanges of resources and expertise with other developers within an amorphous and anonymous cloud of volunteers. To help ensure the integrity of the results, identifying details such as subroutine and variable names are not included in the data sent for analysis, each part of the computation is repeated many times at different sites, and checkpoint information is generated that enables independent checks to be carried out without starting from scratch each time.
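One ingredient described above is that identifying details such as subroutine and variable names are stripped before code is sent out for analysis. A minimal sketch of such anonymisation, assuming a salted-hash renaming scheme that is purely illustrative and not the project's actual tooling, might look as follows.

    # Hypothetical sketch (not the project's actual tooling): strip identifying
    # details from a code fragment before distributing it for analysis by mapping
    # each identifier to a stable opaque token derived from a salted hash, so
    # that every volunteer site receives the same anonymised text.

    import hashlib
    import re

    IDENT = re.compile(r"\b[A-Za-z_][A-Za-z_0-9]*\b")
    KEYWORDS = {"int", "if", "return", "while", "for", "void", "else"}

    def anonymise(source, salt="project-salt"):
        def rename(match):
            name = match.group(0)
            if name in KEYWORDS:
                return name                                  # keep language keywords
            digest = hashlib.sha256((salt + name).encode()).hexdigest()[:8]
            return "id_" + digest                            # opaque, but consistent
        return IDENT.sub(rename, source)

    fragment = "int frob_counter(int widget_count) { return widget_count + 1; }"
    print(anonymise(fragment))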

11.
Real-time embedded systems are often designed with different types of urgency, such as delayable or eager, which are modeled by several urgency variants of the timed automata model. However, most model checkers do not support such urgency semantics, with the exception of the IF toolset, which model checks timed automata with urgency against observers. This work proposes an Urgent Timed Automata (UTA) model with zone-based urgency semantics that gives the same model checking results as the absolute urgency semantics of other existing urgency variants of the timed automata model, including timed automata with deadlines and timed automata with urgent transitions. A necessary and sufficient condition, called complete urgency, is formulated and proved for avoiding zone partitioning, so that the system state graphs are simpler and model checking is faster. A novel zone capping method is proposed that is time-reactive, preserves complete urgency, satisfies all deadlines, and does not need zone partitioning. The proposed verification methods were implemented in the SGM CTL model checker and applied to real-time and embedded systems. Several experiments, comparing the state space sizes produced by SGM with those produced by the IF toolset, show that SGM produces much smaller state spaces.
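For readers unfamiliar with zone-based semantics, the sketch below shows, in a heavily simplified form, what "capping" a zone can mean: each clock is represented only by an interval of possible values (real zone-based model checkers use difference bound matrices that also track clock differences), and the cap bounds every clock's upper limit by a deadline. All names and numbers are illustrative assumptions, not the UTA/SGM implementation.

    # Much-simplified hypothetical sketch: a "zone" here is just an interval of
    # possible values per clock (real zone-based model checkers use difference
    # bound matrices, which also record constraints between clocks). It shows
    # three ingredients the abstract mentions informally: letting time pass,
    # intersecting with a guard, and capping the zone at a deadline.

    INF = float("inf")

    def delay(zone):
        # Time elapses: every clock's upper bound becomes unbounded.
        return {c: (lo, INF) for c, (lo, _) in zone.items()}

    def intersect(zone, clock, lo, hi):
        # Constrain one clock with a guard lo <= clock <= hi.
        zlo, zhi = zone[clock]
        new = dict(zone)
        new[clock] = (max(zlo, lo), min(zhi, hi))
        return new if new[clock][0] <= new[clock][1] else None   # empty zone

    def cap(zone, deadline):
        # Urgency-style capping: no clock may grow past the deadline.
        return {c: (lo, min(hi, deadline)) for c, (lo, hi) in zone.items()}

    z = {"x": (0.0, 0.0), "y": (0.0, 0.0)}    # both clocks start at 0
    z = delay(z)                              # x, y in [0, inf)
    z = cap(z, 5.0)                           # deadline: nothing beyond 5
    z = intersect(z, "x", 2.0, INF)           # guard x >= 2
    print(z)                                  # {'x': (2.0, 5.0), 'y': (0.0, 5.0)}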

12.
Translation validation was invented in the 1990s by Pnueli et al. as a technique to formally verify the correctness of code generators. Rather than certifying the code generator or exhaustively qualifying it, translation validators attempt to verify that program transformations preserve semantics. In this work, we adopt this approach to formally verify that the clock semantics and data dependence are preserved during compilation by the Signal compiler. Translation validation is implemented for every compilation phase, from the initial phase until the final phase where the executable code is generated, by proving that the transformation in each phase of the compiler preserves the semantics. We represent the clock semantics and the data dependence of a program and its transformed counterpart as first-order formulas called clock models and as synchronous dependence graphs (SDGs), respectively. We then introduce clock refinement and dependence refinement relations, which express the preservation of clock semantics and dependence as relations on clock models and SDGs, respectively. Our validator does not require any instrumentation or modification of the compiler, nor any rewriting of the source program.
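Clock refinement, as described above, relates the clock model of the transformed program to that of the source. The sketch below checks such a refinement at a purely propositional level by brute-force enumeration of signal presence/absence; the two clock models, the signal names, and the enumeration itself are illustrative assumptions, whereas the actual validator works on first-order clock models with solver support.

    # Hypothetical sketch (not the Signal validator itself): treat a clock model
    # as a Boolean-valued predicate over the presence/absence of each signal in
    # an instant, and check clock refinement "transformed refines source" by
    # verifying that every assignment satisfying the transformed model also
    # satisfies the source model. A SAT/SMT solver would replace this
    # brute-force enumeration in practice.

    from itertools import product

    SIGNALS = ("x", "y", "z")

    def source_model(x, y, z):
        # Source program: z is present exactly when both x and y are present.
        return z == (x and y)

    def transformed_model(x, y, z):
        # Transformed program: same constraint plus an extra scheduling restriction.
        return z == (x and y) and (not z or x)

    def refines(concrete, abstract, signals=SIGNALS):
        for values in product([False, True], repeat=len(signals)):
            if concrete(*values) and not abstract(*values):
                return False, dict(zip(signals, values))      # counterexample
        return True, None

    ok, cex = refines(transformed_model, source_model)
    print("transformed refines source:", ok)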

13.
In this paper I attempt to cast the current program verification debate within a more general perspective on the methodologies and goals of computer science. I show, first, how any method involved in demonstrating the correctness of a physically executing computer program, whether by testing or formal verification, involves reasoning that is defeasible in nature. Then, through a delineation of the senses in which programs can be run as tests, I show that the activities of testing and formal verification do not necessarily share the same goals and thus do not always constitute alternatives. The testing of a program is not always intended to demonstrate the program's correctness. Testing may seek to accept or reject nonprograms, including algorithms, specifications, and hypotheses regarding phenomena. The relationship between these kinds of testing and formal verification is couched in a more fundamental relationship between two views of computer science, one properly containing the other.

14.
Reference counting is a commonly used technique for resource management. One key correctness criterion in the use of reference counts is that increment and decrement operations must be well matched. In this paper we consider the problem of statically verifying that a given (sequential) program uses reference counts correctly: that is, that the program performs an equal number of increment and decrement operations on every object. We present a polynomial-time algorithm for verifying this property when the program is allowed to have only shallow pointers: that is, the program may contain pointers to reference-counted objects, but it does not contain pointers to pointers. We show that the problem is intractable if general (non-shallow) pointers are allowed. Our polynomial-time algorithm for the case of shallow pointers works by reducing the problem to an affine-relation analysis problem.
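The well-matchedness property itself is easy to state operationally: along any execution, no object's count drops below zero and every count returns to zero. The sketch below checks this for a single recorded trace; it is a dynamic, per-trace stand-in for the paper's static polynomial-time analysis, and the trace format is an assumption made for illustration.

    # Hypothetical sketch: checking that increments and decrements are well
    # matched on a straight-line trace of operations. This is a dynamic,
    # per-trace stand-in for the paper's static analysis, which proves the
    # property for all executions of shallow-pointer programs via
    # affine-relation analysis.

    from collections import Counter

    def well_matched(trace):
        # trace: list of ("inc" | "dec", object_id)
        counts = Counter()
        for op, obj in trace:
            counts[obj] += 1 if op == "inc" else -1
            if counts[obj] < 0:
                return False                    # decrement without a matching increment
        return all(c == 0 for c in counts.values())   # everything released at the end

    print(well_matched([("inc", "a"), ("inc", "b"), ("dec", "b"), ("dec", "a")]))  # True
    print(well_matched([("inc", "a"), ("dec", "a"), ("dec", "a")]))                # False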

15.
16.
Safety assessment of new air traffic management systems is a major concern for civil aviation authorities. Standard techniques such as testing and simulation have serious limitations in new systems that are significantly more autonomous than the older ones. In this paper, we present an innovative approach for establishing the correctness of conflict detection systems. Fundamental to our approach is the concept of trajectory, and how we represent a continuous physical trajectory by a continuous path in the x-y plane constrained by physical laws and operational requirements. From the model of trajectories, we extract, and formally prove, high-level properties that can serve as a framework to analyze conflict scenarios. We use the AILS (Airborne Information for Lateral Spacing) alerting algorithm as a case study of our approach.
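For straight-line, constant-velocity trajectories in the x-y plane, conflict detection reduces to computing the time of closest approach of the relative motion and comparing the separation at that time against a threshold. The sketch below shows this standard calculation; it is not the AILS alerting algorithm, and the positions, speeds, and thresholds are invented for illustration.

    # Hypothetical sketch (not the AILS alerting algorithm): predicted conflict
    # detection for two aircraft flying straight lines at constant speed in the
    # x-y plane. The minimum separation occurs at the time of closest approach
    # of the relative motion; a conflict is flagged if that separation falls
    # below a required threshold within the look-ahead horizon.

    import math

    def time_of_closest_approach(p_rel, v_rel):
        # Minimise |p_rel + t * v_rel| over t >= 0.
        vv = v_rel[0] ** 2 + v_rel[1] ** 2
        if vv == 0.0:
            return 0.0                               # same velocity: distance is constant
        t = -(p_rel[0] * v_rel[0] + p_rel[1] * v_rel[1]) / vv
        return max(0.0, t)

    def conflict(p1, v1, p2, v2, separation, horizon):
        p_rel = (p1[0] - p2[0], p1[1] - p2[1])
        v_rel = (v1[0] - v2[0], v1[1] - v2[1])
        t = min(time_of_closest_approach(p_rel, v_rel), horizon)
        dx = p_rel[0] + t * v_rel[0]
        dy = p_rel[1] + t * v_rel[1]
        return math.hypot(dx, dy) < separation

    # Two aircraft converging head-on, 10 units apart, 5 units required separation.
    print(conflict((0.0, 0.0), (8.0, 0.0), (10.0, 0.0), (-8.0, 0.0), 5.0, 1.0))  # True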

17.
This paper gives an overview of the RISC ProofNavigator, an interactive proving assistant for the area of program verification. The assistant combines the user-guided top-down decomposition of proofs with the automatic simplification and closing of proof states by an external satisfiability solver. The software exhibits a modern graphical user interface which has been developed with a focus on simplicity, in order to make the software suitable for educational scenarios. Nevertheless, some use cases of a certain level of complexity demonstrate that it may also be appropriate for various realistic applications.

18.
The correct functioning of interactive computer systems depends on both the faultless operation of the device and correct human actions. In this paper, we focus on system malfunctions due to human actions. We present abstract principles that generate cognitively plausible human behaviour. These principles are then formalised in a higher-order logic as a generic, and so retargetable, cognitive architecture, based on results from cognitive psychology. We instantiate the generic cognitive architecture to obtain specific user models. These are then used in a series of case studies on the formal verification of simple interactive systems. By doing this, we demonstrate that our verification methodology can detect a variety of realistic, potentially erroneous actions, which emerge from the combination of a poorly designed device and cognitively plausible human behaviour.
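The kind of erroneous action this methodology can expose is illustrated, very loosely, by the classic post-completion error sketched below: a device that dispenses cash before returning the card, composed with a user model whose plausible rule is to leave once the goal is achieved. The device states, user rules, and action names are all hypothetical, and the paper's higher-order-logic cognitive architecture is far more general.

    # Hypothetical sketch (not the paper's higher-order-logic formalisation):
    # compose a tiny device model with a tiny rule-based user model and follow
    # the joint behaviour. The device dispenses cash before returning the card;
    # the user's goal is "get cash", so a plausible rule "leave once the goal is
    # achieved" exposes a post-completion error: the card is left behind.

    def user_step(user, device):
        # Possible user actions given the user's state and the device prompt.
        if user == "wants-cash":
            if device == "show-cash":
                return ["take-cash"]
            if device == "ask-card":
                return ["insert-card"]
            return ["wait"]
        if user == "has-cash":
            return ["leave"]          # goal achieved: plausible to leave immediately
        return []

    def device_step(device, action):
        table = {
            ("ask-card", "insert-card"): "show-cash",
            ("show-cash", "take-cash"): "return-card",
        }
        return table.get((device, action), device)

    def run():
        user, device = "wants-cash", "ask-card"
        trace = []
        while True:
            actions = user_step(user, device)
            if not actions or actions == ["wait"]:
                break
            action = actions[0]
            trace.append(action)
            device = device_step(device, action)
            if action == "take-cash":
                user = "has-cash"
            if action == "leave":
                break
        # The card is still in the machine when the user leaves: an erroneous
        # action emerging from the device design plus plausible behaviour.
        print(trace, "| card left in machine:", device == "return-card")

    run()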

19.
This paper presents some results of integrating predicate transition nets with first order temporal logic in the specification and verification of concurrent systems. The intention of this research is to use predicate transition nets as a specification method and first order temporal logic as a verification method, so that their strengths, namely the easy comprehension of predicate transition nets and the reasoning power of first order temporal logic, can be combined. In this paper, a theoretical relationship between the computation models of these two formalisms is presented; an algorithm for systematically translating a predicate transition net into a corresponding temporal logic system is outlined; and a special temporal refutation proof technique is proposed and illustrated in verifying various concurrent properties of the predicate transition net specification of the five dining philosophers problem.
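As a small illustration of the kind of net being specified, the sketch below encodes a token-based model of the five dining philosophers in which a philosopher picks up both forks atomically, and checks over all reachable markings that neighbouring philosophers never eat simultaneously. It is not the paper's predicate transition net notation or its temporal-logic translation; the encoding and the checked property are illustrative assumptions.

    # Hypothetical sketch (not the paper's predicate transition net notation or
    # its temporal-logic translation): a tiny token-based net for the five
    # dining philosophers. Tokens are philosopher/fork indices, markings are
    # frozensets, and exhaustive exploration of the reachable markings checks
    # that two neighbouring philosophers never eat at the same time.

    N = 5

    def initial_marking():
        thinking = frozenset(range(N))
        forks = frozenset(range(N))
        eating = frozenset()
        return (thinking, eating, forks)

    def successors(marking):
        thinking, eating, forks = marking
        for i in range(N):
            left, right = i, (i + 1) % N
            if i in thinking and left in forks and right in forks:   # pick up both forks
                yield (thinking - {i}, eating | {i}, forks - {left, right})
            if i in eating:                                          # put both forks down
                yield (thinking | {i}, eating - {i}, forks | {left, right})

    def check():
        init = initial_marking()
        seen, stack = {init}, [init]
        while stack:
            marking = stack.pop()
            _, eating, _ = marking
            for i in eating:
                assert (i + 1) % N not in eating, "neighbours eating simultaneously"
            for nxt in successors(marking):
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        print("mutual exclusion of neighbours holds over", len(seen), "markings")

    check()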

20.
This paper develops modular verification rules for Ada generics which are proven to be sound and complete. The generic mechanism in Ada allows modules to be parameterized by types, procedures and functions. The modularity property allows a generic to be verified once and then exported to other modules, which assume that it is correct. This requires the generic to have a specification which is used in verifying other modules, but its implementation cannot be used for this purpose. Thus, modular verification cannot be based on removing generics by macro expansion, which requires the use of the generic's implementation. The main difficulty with specifying and verifying a generic is that the specification language may need to be extended with a new theory for specifying and reasoning about properties of objects whose type is a parameter to the generic. Such theories must be part of the specification of the generic, and this raises the possibility that the extended specification language may not be expressive, even if it was before the extension. The use of strings in our specification language prevents this from happening, which is proven in the paper; this is a major step toward establishing the completeness of our rules. Modularity also had a large impact on our semantics for programming constructs, which is quite different from the usual semantics in the literature, even though it is still based on the denotational semantics of Scott and Strachey. The main reason for this is that we had to modify the standard definition of validity. Modularity requires that validity depend on certain internal assertions in a program, such as the precondition of a procedure invoked in the program.
