11.
Alex Groce, Klaus Havelund, Gerard Holzmann, Rajeev Joshi, Ru-Gang Xu. Annals of Mathematics and Artificial Intelligence, 2014, 70(4): 315-349
In this paper we discuss the application of a range of techniques to the verification of mission-critical flight software at NASA’s Jet Propulsion Laboratory. For this type of application we want to achieve a higher level of confidence than can be achieved through standard software testing. Unfortunately, given the current state of the art, especially when efforts are constrained by the tight deadlines and resource limitations of a flight project, it is not feasible to produce a rigorous formal proof of correctness of even a well-specified stand-alone module such as a file system (much less more tightly coupled or difficult-to-specify modules). This means that we must look for a practical alternative in the area between traditional testing and proof, as we attempt to optimize rigor and coverage. The approaches we describe here are based on testing, model checking, constraint-solving, monitoring, and finite-state machine learning, in addition to static code analysis. The results we have obtained in the domain of file systems are encouraging, and suggest that for more complex properties of programs with complex data structures, it is possibly more beneficial to use constraint solvers to guide and analyze execution (i.e., as in testing, even if performed by a model checking tool) than to translate the program and property into a set of constraints, as in abstraction-based and bounded model checkers. Our experience with non-file-system flight software modules shows that methods even further removed from traditional static formal methods can be assisted by formal approaches, yet readily adopted by test engineers and software developers, even as the key problem shifts from test generation and selection to test evaluation.
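The contrast the abstract draws, using a solver to guide and analyze concrete executions rather than translating the whole program into constraints, can be illustrated with a toy sketch. Everything here is hypothetical: `write_block` is a stand-in for a flight file-system operation with a seeded bug, and the exhaustive search is a stand-in for a real SMT solver, not JPL code.

```python
from itertools import product

def write_block(offset: int, length: int) -> str:
    """Hypothetical file-system operation with a seeded bug:
    writes that straddle the 8-byte block boundary are mishandled."""
    if offset < 0 or length <= 0:
        return "EINVAL"
    if offset < 8 <= offset + length:   # buggy boundary-straddling case
        return "CORRUPT"
    return "OK"

def guided_search(path_condition, domain=range(0, 16)):
    """Stand-in for a constraint solver: enumerate candidate inputs
    satisfying a path condition, then actually *execute* each one
    (testing-style), instead of encoding the program itself as constraints."""
    for offset, length in product(domain, repeat=2):
        if path_condition(offset, length):
            if write_block(offset, length) == "CORRUPT":
                return (offset, length)   # concrete failing test case
    return None

# Constrain the search to valid inputs so only the deep bug can fire.
failing = guided_search(lambda o, l: o >= 0 and l > 0)
```

In this sketch the "solver" only prunes the input space; the program stays a black box that is run, which mirrors the execution-guided approach the abstract favors over whole-program constraint translation.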
12.
Error explanation with distance metrics (cited by 1: 0 self-citations, 1 by others)
Alex Groce, Sagar Chaki, Daniel Kroening, Ofer Strichman. International Journal on Software Tools for Technology Transfer (STTT), 2006, 8(3): 229-247
In the event that a system does not satisfy a specification, a model checker will typically automatically produce a counterexample trace that shows a particular instance of the undesirable behavior. Unfortunately, the important steps that follow the discovery of a counterexample are generally not automated. The user must first decide if the counterexample shows genuinely erroneous behavior or is an artifact of improper specification or abstraction. In the event that the error is real, there remains the difficult task of understanding the error well enough to isolate and modify the faulty aspects of the system. This paper describes a (semi-)automated approach for assisting users in understanding and isolating errors in ANSI C programs. The approach, derived from Lewis’ counterfactual approach to causality, is based on distance metrics for program executions. Experimental results show that the power of the model checking engine can be used to provide assistance in understanding errors and to isolate faulty portions of the source code.
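The distance-metric idea can be sketched in miniature: represent each execution as a sequence of steps, find the passing run closest to the failing run, and report where the two diverge as candidate fault locations. This is a simplified stand-in for the paper's metric over ANSI C executions; the traces and line numbers below are hypothetical.

```python
def trace_distance(a, b):
    """Distance between two executions, each a list of (line, value) steps:
    count aligned positions that differ, plus any difference in length.
    A much-simplified stand-in for the paper's execution distance metric."""
    shared = min(len(a), len(b))
    diffs = sum(1 for i in range(shared) if a[i] != b[i])
    return diffs + abs(len(a) - len(b))

def explain(failing, passing_runs):
    """Pick the passing run closest to the failing one and return the
    source lines where the two executions diverge: candidate faults."""
    closest = min(passing_runs, key=lambda run: trace_distance(failing, run))
    shared = min(len(failing), len(closest))
    return [failing[i][0] for i in range(shared) if failing[i] != closest[i]]

# Hypothetical traces: one (line_number, observed_value) pair per step.
failing = [(1, 0), (2, 5), (4, -1), (5, "crash")]
passing = [[(1, 0), (2, 5), (3, 1), (5, "ok")],
           [(1, 9), (2, 2), (3, 3), (5, "ok")]]
suspect_lines = explain(failing, passing)   # lines where behavior diverges
```

Because the nearest passing run agrees with the failing run on its prefix, the report is narrowed to the few steps that actually differ, which is the intuition behind using a distance metric rather than showing the user the whole counterexample.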