Similar documents
20 similar documents found
1.
This special section is devoted to a selection of papers that appeared originally in the Proceedings of TACAS 2001, the 7th International Conference on Tools and Algorithms for the Construction and Analysis of Systems [1], which took place in Genova, Italy, in April 2001 as a constituent event of the European Joint Conferences on Theory and Practice of Software. All papers present approaches, tools and algorithms that aim at extending the scope of formal techniques (coverage of system specifications, data structures, size, and their trade-offs) for validation, verification, and testing of software systems. They are by no means a complete account of the numerous ways in which real software and software systems may become subject to rigorous investigation, but they provide an interesting sampling of novel approaches towards scalability of formal methods-based validation.

2.
The importance of verification for software products is increasingly appreciated in industry, although not yet to the level of making it a standard approach to high-quality software. Since 2005, a global initiative has been underway, started by eminent researchers in both industry and academia, with the aim of establishing and disseminating a culture of software verification from first principles by means of theories, tools and experiments. This special section contains a selection of contributions originally presented at the 2008 Workshop on Tools at VSTTE 2008, the conference on Verified Software: Theories, Tools and Experiments, held in Toronto. The VSTTE series of conferences and workshops focuses on the challenge of verifying software systems. Within VSTTE, the scope of the Tools workshop is implementations and enabling techniques for program verifiers, which are important ingredients for the dissemination of principles and techniques among industrial practitioners. This special section complements a sister special section in the journal Formal Aspects of Computing (Springer). While the FACJ papers address more foundational aspects of tool-based verification and tool construction, the present section presents two toolsets, reflections on usability for verification tools, and a novel abstraction technique.

3.
This special section is devoted to a selection of contributions originally presented at the 1st International Workshop on Parallel and Distributed Model Checking (PDMC 2002), which took place in Brno, Czech Republic, in September 2002 as a satellite event of the 13th International Conference on Concurrency Theory (CONCUR 2002). The growing importance of automated formal verification in industry is driving increased interest in those aspects that have a direct impact on its applicability to real-world problems. One of the main technical challenges is devising tools that can handle large state spaces. Over the last several years, numerous approaches to this problem have been developed, and recently there has been increasing interest in parallelizing and distributing verification techniques. The papers in this special section provide an interesting sampling of typical approaches in this direction.

4.
This special section features six papers concerned with state-of-the-art research in verification, model checking, and abstract interpretation: three research areas that share the goal of providing mathematically well-founded techniques for the sound semantic analysis of computer systems. While each area takes a particular view of this problem, there is a growing awareness that closer collaboration is fruitful, and over the last decade methods combining ideas from different areas have been developed. The papers in this special section are carefully revised and extended versions of articles first presented at the VMCAI 2009 conference.

5.
This special section presents three extended and revised papers which originally appeared in the proceedings of the 12th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2006), held in Braga, Portugal, in late March 2006 as a constituent event of the European Joint Conferences on Theory and Practice of Software. All three papers deal with very promising extensions and integrations of formal verification techniques.

6.
This special section contains the revised and expanded versions of eight of the papers from the 10th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS) held in March/April 2004 in Barcelona, Spain. The conference proceedings appeared as volume 2988 in the Lecture Notes in Computer Science series published by Springer. TACAS is a forum for researchers, developers and users interested in rigorously based tools for the construction and analysis of systems. The conference serves to bridge the gaps between different communities – including but not limited to those devoted to formal methods, software and hardware verification, static analysis, programming languages, software engineering, real-time systems, and communications protocols – that share common interests in, and techniques for, tool development. Other more theoretical papers from the conference are collected in a special section of the Theoretical Computer Science journal.

7.
Verifying the IEEE 1394 FireWire Tree Identify Protocol with SMV
This case study contains a formal verification of the IEEE 1394 FireWire tree identify protocol. Crucial properties of finite models of the protocol have been validated with state-of-the-art symbolic model checkers. Various optimisation techniques were applied to verify concrete and generic configurations. Received September 2001 / Accepted in revised form September 2001. Correspondence and offprint requests to: Viktor Schuppan, Computer Systems Institute, ETH Zurich, 8092 Zurich, Switzerland. Email: Viktor.Schuppan@inf.ethz.ch

8.
This paper investigates the use of so-called up-to techniques for bisimulation in the framework of verification. We first introduce a method to exploit these techniques, which originate in theoretical work, in the applied context of verification. We then apply it to the π-calculus, in order to design an up-to-bisimulation verification algorithm for π-calculus terms. The outcome of this effort is evaluated on two examples that were run on a prototype implementation. Published online: 18 July 2001

9.
When verifying concurrent systems described by transition systems, state explosion is one of the most serious problems: systems are often described by transition systems with a prohibitive number of states. The primary cause of this problem is the parallel composition of interacting processes. In recent years, compositional techniques have been developed to attack the state-explosion problem. These techniques are based on dividing the verification task into simpler tasks, exploiting the natural decomposition of complex systems into processes. In this paper we present a formula-based compositional approach that allows us to deduce a property of a parallel composition of processes by checking it only on a component process. The approach can be automated and is completely transparent to the user. Received: 17 May 2001 / 27 February 2002
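The state explosion caused by parallel composition can be seen in a toy sketch (not from the paper; the transition-system encoding below is an illustrative assumption): interleaving two components yields a composite whose state space is the Cartesian product of the component state spaces, so sizes multiply.

```python
from itertools import product

def parallel_compose(ts1, ts2):
    """Interleaving parallel composition of two transition systems.

    Each system is a dict mapping a state to its set of successor
    states.  The composite state space is the Cartesian product of
    the component state spaces, which is the source of the
    state-explosion problem described above.
    """
    composite = {}
    for s1, s2 in product(ts1, ts2):
        succs = set()
        for t1 in ts1[s1]:          # component 1 takes a step
            succs.add((t1, s2))
        for t2 in ts2[s2]:          # component 2 takes a step
            succs.add((s1, t2))
        composite[(s1, s2)] = succs
    return composite

# Two tiny 3-state cyclic processes compose into a 9-state system.
p = {0: {1}, 1: {2}, 2: {0}}
q = {'a': {'b'}, 'b': {'c'}, 'c': {'a'}}
c = parallel_compose(p, q)
print(len(p), len(q), len(c))
```

With n components of k states each the composite has k**n states, which is why compositional approaches that check a property on a single component are attractive.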

10.
The goal of the RERS challenge is to evaluate the effectiveness of various verification and validation approaches on reactive systems, a class of systems that is highly relevant for industrial critical applications. The RERS challenge brings together researchers from different areas of software verification and validation, including static analysis, model checking, theorem proving, symbolic execution, and testing. The challenge provides a forum for experimental comparison of different techniques on specifically designed verification tasks. These benchmarks are automatically synthesized to exhibit chosen properties, and then enhanced to include dedicated dimensions of difficulty, such as conceptual complexity of the properties (e.g., reachability, safety, liveness), size of the reactive systems (a few hundred lines to millions of lines), and complexity of language features (arrays and pointer arithmetic). The STTT special section on RERS describes the results of the evaluations and the different analysis techniques that were used in the RERS challenges 2012 and 2013.

11.
The increasing level of automation in critical infrastructures requires the development of effective ways of finding faults in safety-critical software components. Synchronization in concurrent components is especially prone to errors, and, due to the difficulty of exploring all thread interleavings, synchronization faults are hard to find. In this paper we present an experimental study demonstrating the effectiveness of model checking techniques in finding synchronization faults in safety-critical software when they are combined with a design-for-verification approach. We based our experiments on an automated air traffic control software component called the Tactical Separation Assisted Flight Environment (TSAFE). We first reengineered TSAFE using the concurrency controller design pattern. This pattern enables a modular verification strategy by decoupling the behaviors of the concurrency controllers from the behaviors of the threads that use them, using interfaces specified as finite state machines. The behavior of a concurrency controller is verified with respect to arbitrary numbers of threads using the infinite-state model checking techniques implemented in the Action Language Verifier (ALV). The threads that use the controller classes are checked for interface violations using the finite-state model checking techniques implemented in the Java Path Finder (JPF). We present techniques for thread isolation that enable us to analyze each thread in the program separately during interface verification. We conducted two sets of experiments using these verification techniques. First, we created 40 faulty versions of TSAFE using manual fault seeding. During this exercise we also developed a classification of faults that can be found using the presented design-for-verification approach. Next, we generated another 100 faulty versions of TSAFE using randomly seeded faults that were created automatically based on this fault classification. We used both infinite- and finite-state verification techniques for finding the seeded faults. The results of our experiments demonstrate the effectiveness of the presented design-for-verification approach in eliminating synchronization faults.
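The idea of checking threads against an interface specified as a finite state machine can be sketched as follows. This is not the ALV/JPF machinery from the paper; the lock-controller interface and the `conforms` helper are hypothetical illustrations of interface-violation checking.

```python
def conforms(fsm, start, trace):
    """Check a thread's event sequence against an interface FSM.

    fsm maps (state, event) -> next state; any event that has no
    transition from the current state is an interface violation.
    Returns (ok, last_state, offending_event).
    """
    state = start
    for event in trace:
        key = (state, event)
        if key not in fsm:
            return False, state, event   # violation found
        state = fsm[key]
    return True, state, None

# Hypothetical lock-controller interface: acquire/release must alternate.
lock_fsm = {('idle', 'acquire'): 'held',
            ('held', 'release'): 'idle'}

print(conforms(lock_fsm, 'idle', ['acquire', 'release', 'acquire']))
print(conforms(lock_fsm, 'idle', ['acquire', 'acquire']))  # double acquire
```

Because each thread is checked against the small interface FSM rather than the full controller implementation, each check stays independent of the other threads, mirroring the modularity argument above.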

12.
Pragmatics of model checking: an STTT special section
Since its discovery in the early 1980s, model checking has emerged as one of the most exciting areas in the field of formal verification of system correctness. The chief reason for this enthusiasm is that model checkers establish in a fully automated manner whether or not a system satisfies formal requirements; users need not construct proofs. Until the early 1990s, however, the practical impact of model checking was modest, in large part because the state-explosion problem limited the applicability of the technology. Recent advances in the field have dramatically expanded the scope of model checking, and interest in it on the part of practitioners is growing. This special section surveys several recent approaches to attacking the state-explosion problem that are supporting the continued advancement of model checking as a viable technology for improved system design.

13.
The interplay of real time and probability is crucial to the correctness of the IEEE 1394 FireWire root contention protocol. We present a formal verification of the protocol using probabilistic model checking. Rather than analyse the functional aspects of the protocol, by asking questions such as ‘Will a leader be elected?', we focus on the protocol's performance, by asking ‘How certain are we that a leader will be elected sufficiently quickly?' Probabilistic timed automata are used to formally model and verify the protocol against properties which require that a leader is elected before a deadline with a certain probability. We use techniques such as abstraction, reachability analysis and integer-time semantics to aid the model-checking process, and the efficacy of these techniques is compared. Received July 2001 / Accepted in revised form November 2002. Correspondence and offprint requests to: Marta Kwiatkowska, School of Computer Science, University of Birmingham, Birmingham B15 2TT, UK. Email: M.Z.Kwiatkowska@cs.bham.ac.uk
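The flavour of a quantitative deadline property can be conveyed with a toy calculation. This is not the probabilistic timed automaton model from the paper: it assumes, purely for illustration, that each contention round is an independent Bernoulli trial with success probability `p_success`.

```python
def prob_elected_by(deadline_rounds, p_success=0.5):
    """Probability that a leader is elected within a given number of
    contention rounds, modeling each round as an independent coin
    flip that resolves the contention with probability p_success.

    The failure probability after n rounds is (1 - p)^n, so the
    property 'a leader is elected before the deadline with
    probability at least q' holds iff 1 - (1 - p)^n >= q.
    """
    return 1.0 - (1.0 - p_success) ** deadline_rounds

# Longer deadlines make leader election increasingly certain.
for rounds in (1, 2, 10):
    print(rounds, prob_elected_by(rounds))
```

Even in this crude abstraction the qualitative question (‘Will a leader be elected?') is trivially yes in the limit, while the quantitative question (‘with what probability, by when?') depends on the deadline, which is exactly the distinction the abstract draws.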

14.
High-performance scientific computing software is of critical international importance, as it supports scientific exploration and engineering. Software development in this area is highly challenging owing to the use of parallel/distributed programming methods and complex communication and synchronization libraries. There is very little use of formal methods to debug software in this area, given that the scientific computing community and the formal methods community have not traditionally worked together. The Utah Gauss project combines expertise from scientific computing and formal methods in addressing this problem. We currently focus on MPI programs, which are the kind that run on over 60% of the world's supercomputers. These are programs written in C / C++ / FORTRAN employing message passing concurrency supported by the Message Passing Interface (MPI) library. Large-scale MPI programs also employ shared memory threads to manage concurrency within smaller task sub-groups, capitalizing on the recent availability of small-scale (e.g. single-chip) shared memory multiprocessors; such mixed programming styles can result in additional bugs. MPI libraries themselves can be buggy as they strive to implement complex requirements employing aggressive techniques such as multi-threading. We have built a model extractor that extracts from MPI C programs a formal model consisting of communicating processes represented in Microsoft's Zing modeling language. MPI library functions are also being modeled in Zing. This allows us to run formal analysis on the models to detect bugs in the MPI programs being analyzed. Our preliminary results and future plans are described; in addition, our contribution is to expose the special needs of this area and suggest specific avenues for problem-driven advances in software model checking applied to scientific computing software development and verification.

15.
This paper presents a new three-stage verification system based on three types of features: global features; local features of the corner points; and function features that contain information about each point of the signatures. The first verification stage implements a parameter-based method, in which the Mahalanobis distance is used as a dissimilarity measure between signatures. The second verification stage involves corner extraction and corner matching; it also performs signature segmentation. The third verification stage implements a function-based method, based on an elastic matching algorithm that establishes a point-to-point correspondence between the compared signatures. By combining the three different types of verification, a high security level can be reached. According to our experiments, the rates of false rejection and false acceptance are, respectively, 5.8% and 0%. Received: 12 February 2001 / Received in revised form: 24 May 2001 / Accepted: 03 July 2001
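The Mahalanobis distance used in the first stage normalizes each feature by its variability, so features on different scales contribute comparably. The sketch below (not the paper's implementation; the feature names and reference statistics are invented) uses the common diagonal-covariance special case.

```python
import math

def mahalanobis_diag(x, mean, var):
    """Mahalanobis distance assuming a diagonal covariance matrix:
    each feature difference is scaled by that feature's variance
    before the differences are combined, so a deviation is measured
    in 'standard deviations' rather than raw units."""
    return math.sqrt(sum((xi - mi) ** 2 / vi
                         for xi, mi, vi in zip(x, mean, var)))

# Hypothetical reference model learned from enrollment signatures:
mean = [10.0, 0.5]   # e.g. stroke count, aspect ratio (invented)
var  = [4.0, 0.01]

# A 2-unit deviation in a high-variance feature scores the same
# region as a 0.2 deviation in a low-variance one.
print(mahalanobis_diag([12.0, 0.5], mean, var))
print(mahalanobis_diag([10.0, 0.7], mean, var))
```

A questioned signature would be rejected at this stage when its distance to the reference model exceeds a chosen threshold, passing borderline cases on to the corner-matching and elastic-matching stages.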

16.
17.
In this paper we take a closer look at the automated analysis of designs, in particular verification by model checking. Model checking tools are increasingly being used for the verification of real-life systems in an industrial context. In addition to ongoing research aimed at curbing the complexity of dealing with the inherent state space explosion problem – which allows us to apply these techniques to ever larger systems – attention must now also be paid to the methodology of model checking, to decide how to use these techniques to their best advantage. Model checking “in the large” causes a substantial proliferation of interrelated models and model checking sessions that must be carefully managed in order to control the overall verification process. We show that in order to do this well both notational and tool support are required. We discuss the use of software configuration management techniques and tools to manage and control the verification trajectory. We present Xspin/Project, an extension to Xspin, which automatically controls and manages the validation trajectory when using the model checker Spin. Published online: 18 June 2002

18.
Verification and validation are terms that have been used for several years in software engineering, and there are now many verification and validation techniques, plus considerable experience and expertise in using them. However, applying these techniques to knowledge-based systems is not straightforward. The essential differences between conventional systems and knowledge-based systems suggest that these techniques must be expanded and adapted, but new techniques are also needed. This article has two major goals: first, it makes some comparisons between verification and validation as found in traditional software engineering and in knowledge-based systems, pointing out what is special about the latter as compared with the former; second, it provides a framework for a discussion of the various European work on verification and validation of knowledge-based systems. The perspective put forward in this article allows a vast amount of work to be surveyed and analysed beyond the implementation level, by differentiating the symbol level and the knowledge level within a knowledge-based system.

19.
This article describes symbolic approximation, a theoretical foundation for techniques evolved for large-scale verification – in particular, for post hoc verification of the C code in large-scale open-source projects such as the Linux kernel. The corresponding toolset’s increasing maturity means that it is now capable of treating millions of lines of C code source in a few hours on very modest support platforms. In order to explicitly manage the state-space-explosion problem that bedevils model-checking approaches, we work with approximations to programs in a symbolic domain where approximation has a well-defined meaning. A more approximate program means being able to say less about what the program does, which means weaker logic for reasoning about the program. So we adjust the approximation by adjusting the applied logic. That guarantees a safe approximation (one which may generate false alarms but no false negatives) provided the logic used is weaker than the exact logic of C. We choose the logic to suit the analysis.
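The notion of a safe approximation (possible false alarms, no false negatives) is the classic soundness property of abstract interpretation. The sketch below is not the article's symbolic-approximation framework; it illustrates the same safety idea in the textbook sign domain, where each value is over-approximated by the set of signs it might have.

```python
# Sign abstraction: a concrete integer is approximated by a set of
# possible signs; larger sets mean a coarser, safer approximation.
TOP = frozenset({'-', '0', '+'})

def alpha(n):
    """Abstract a concrete integer into its (exact) sign set."""
    return frozenset({'+' if n > 0 else '-' if n < 0 else '0'})

def abs_add(a, b):
    """Abstract addition over sign sets: the result contains every
    sign any concrete sum could have, so no outcome is missed."""
    table = {('+', '+'): {'+'}, ('-', '-'): {'-'}, ('0', '0'): {'0'},
             ('+', '0'): {'+'}, ('0', '+'): {'+'},
             ('-', '0'): {'-'}, ('0', '-'): {'-'},
             ('+', '-'): {'-', '0', '+'}, ('-', '+'): {'-', '0', '+'}}
    out = set()
    for sa in a:
        for sb in b:
            out |= table[(sa, sb)]
    return frozenset(out)

# Soundness in action: pos + pos is known exactly, while pos + neg
# loses precision (a potential false alarm) but stays safe.
print(abs_add(alpha(3), alpha(5)))
print(abs_add(alpha(3), alpha(-5)))
```

The trade-off mirrors the article's point: coarsening the approximation weakens what the logic can conclude about the program, but never lets a real behaviour escape the analysis.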

20.
Analysis of Timed Systems Using Time-Abstracting Bisimulations
The objective of this paper is to show how verification of dense-time systems modeled as timed automata can be effectively performed using untimed verification techniques. In that way, the existing rich infrastructure of algorithms and tools for the verification of untimed systems can be exploited. The paper completes the ideas introduced in (Tripakis and Yovine, 1996, in Proc. 8th Conf. Computer-Aided Verification, CAV'96, Rutgers, NJ, LNCS, Vol. 1102, Springer-Verlag, 1996, pp. 232–243).

Our approach consists of two steps. First, given a timed system A, we compute a finite graph G which captures the behavior of A modulo the fact that exact time delays are abstracted away. Then, we apply untimed verification techniques on G to prove properties of A. As property-specification languages, we use both the linear-time formalism of timed Büchi automata (TBA) and the branching-time logic TCTL. Model checking A against properties specified as TBA or TCTL formulae comes down to applying, respectively, automata-emptiness or CTL model-checking algorithms on G.

The abstraction of exact delays is formalized under the concept of time-abstracting bisimulations. We define three time-abstracting bisimulations which are strictly ordered with respect to their reduction power. The strongest of them preserves both linear- and branching-time properties, whereas the two weaker ones preserve only linear-time properties.

The finite graph G is the quotient of A with respect to a time-abstracting bisimulation. Generating G is called minimization and can be done by adapting a partition-refinement algorithm to the timed case. The adapted algorithm is symbolic, that is, equivalence classes are represented as simple polyhedra. When these polyhedra are not convex, operations become expensive; therefore, we develop a partition-refinement technique which preserves convexity.

We have implemented the minimization algorithm in a prototype module called minim, as part of the real-time verification platform KRONOS (Bozga et al., 1998, in CAV'98). minim connects KRONOS to the CADP tool suite for the verification of untimed graphs (Fernandez et al., 1992, in 14th Int. Conf. on Software Engineering). To demonstrate the practical interest of our approach, we present two case studies, namely Fischer's mutual exclusion protocol and the CSMA/CD communication protocol.
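The untimed core of the minimization step is ordinary partition refinement. The sketch below is not the symbolic, polyhedra-based algorithm of the paper; it is a naive finite-state version on an explicit labelled graph, refining an initial label-based partition until it stabilizes into bisimulation equivalence classes.

```python
def bisim_quotient(states, labels, succ):
    """Naive partition refinement: start from blocks of equally
    labelled states, then split any block whose members can reach
    different sets of blocks, until no block splits.  The final
    blocks are the (strong) bisimulation equivalence classes."""
    # Initial partition: group states by label.
    blocks = {}
    for s in states:
        blocks.setdefault(labels[s], set()).add(s)
    partition = list(blocks.values())

    changed = True
    while changed:
        changed = False
        block_of = {s: i for i, b in enumerate(partition) for s in b}
        new_partition = []
        for b in partition:
            # Signature of a state: the set of blocks it can reach.
            sig = {}
            for s in b:
                key = frozenset(block_of[t] for t in succ[s])
                sig.setdefault(key, set()).add(s)
            if len(sig) > 1:
                changed = True        # the block was split
            new_partition.extend(sig.values())
        partition = new_partition
    return partition

# a and b are bisimilar: both are p-labelled and step to a q-deadlock.
states = ['a', 'b', 'c', 'd']
labels = {'a': 'p', 'b': 'p', 'c': 'q', 'd': 'q'}
succ = {'a': {'c'}, 'b': {'d'}, 'c': set(), 'd': set()}
print(sorted(sorted(b) for b in bisim_quotient(states, labels, succ)))
```

The timed version of the paper replaces explicit states with polyhedral equivalence classes and the splitting test with symbolic operations on those polyhedra, which is where convexity preservation becomes important.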
