11.
Alex Groce, Klaus Havelund, Gerard Holzmann, Rajeev Joshi, Ru-Gang Xu. Annals of Mathematics and Artificial Intelligence, 2014, 70(4): 315-349
In this paper we discuss the application of a range of techniques to the verification of mission-critical flight software at NASA's Jet Propulsion Laboratory. For this type of application we want to achieve a higher level of confidence than standard software testing can provide. Unfortunately, given the current state of the art, especially when efforts are constrained by the tight deadlines and resource limitations of a flight project, it is not feasible to produce a rigorous formal proof of correctness of even a well-specified stand-alone module such as a file system (much less more tightly coupled or difficult-to-specify modules). This means we must look for a practical alternative in the area between traditional testing and proof, as we attempt to optimize rigor and coverage. The approaches we describe here are based on testing, model checking, constraint solving, monitoring, and finite-state machine learning, in addition to static code analysis. The results we have obtained in the domain of file systems are encouraging, and suggest that for more complex properties of programs with complex data structures, it may be more beneficial to use constraint solvers to guide and analyze execution (i.e., as in testing, even if performed by a model checking tool) than to translate the program and property into a set of constraints, as in abstraction-based and bounded model checkers. Our experience with non-file-system flight software modules shows that methods even further removed from traditional static formal methods can be assisted by formal approaches, yet readily adopted by test engineers and software developers, even as the key problem shifts from test generation and selection to test evaluation.
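To make the contrast in this abstract concrete: rather than compiling the whole program and property into one constraint system, a solver can merely propose inputs satisfying chosen path conditions, and the real code is then executed on those inputs. The sketch below is a hypothetical illustration of that solver-guided style using Z3's Python bindings (z3-solver); `write_block` and the path conditions are invented stand-ins, not the JPL tooling.

```python
# A minimal, hypothetical sketch of solver-guided testing (not the JPL tools):
# the solver only proposes concrete inputs; correctness is judged by actually
# running the code under test, as in testing.
from z3 import Int, Solver, And, sat

def write_block(offset, length, capacity=4096):
    """Toy stand-in for a file-system operation under test."""
    if offset < 0 or length <= 0:
        raise ValueError("bad arguments")
    if offset + length > capacity:
        raise IOError("write past end of storage")  # failure surface
    return length

def solver_guided_tests():
    off, ln = Int("off"), Int("ln")
    # Path conditions we want to cover, expressed as Z3 constraints.
    targets = [
        And(off >= 0, ln > 0, off + ln <= 4096),  # in-bounds write
        And(off >= 0, ln > 0, off + ln > 4096),   # boundary overflow
    ]
    for cond in targets:
        s = Solver()
        s.add(cond)
        if s.check() == sat:
            m = s.model()
            inputs = (m[off].as_long(), m[ln].as_long())
            try:
                write_block(*inputs)       # concrete execution
                print(inputs, "-> ok")
            except Exception as e:
                print(inputs, "->", e)     # solver steered us to the failure

solver_guided_tests()
```

The point of the pattern is that only the input constraints live in the solver; the program itself never has to be translated.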
12.
KD Linch, WE Miller, RB Althouse, DW Groce, JM Hale. American Journal of Industrial Medicine, 1998, 34(6): 547-558
BACKGROUND: The objective of this work was to estimate the percentage of workers, by industry, exposed to defined concentrations of respirable crystalline silica dust. METHODS: An algorithm was used to estimate the percentage of workers exposed to crystalline silica in 1993 at concentrations of at least 1, 2, 5, and 10 times the National Institute for Occupational Safety and Health (NIOSH) Recommended Exposure Limit (REL) of 0.05 mg/m³. Respirable crystalline silica air sampling data from regulatory compliance inspections performed by the Occupational Safety and Health Administration (OSHA) for the years 1979-1995, recorded in the Integrated Management Information System (IMIS), were used to estimate exposures. Consequently, this work does not cover industries, such as mining and agriculture, that fall outside OSHA's jurisdiction. The estimates are stratified by Standard Industrial Classification (SIC) code. RESULTS: Some of the highest respirable crystalline silica dust concentrations occurred in construction (masonry, heavy construction, and painting), in iron and steel foundries (casting), and in metal services (sandblasting, grinding, or buffing of metal parts). In SIC 174 (Masonry, Stonework, Tile Setting, and Plastering), 1.8% of workers (13,800) were exposed to at least 10 times the NIOSH REL. For SIC 162 (Heavy Construction, Except Highway and Street Construction), the figure is 1.3% (6,300 workers). SIC 172 (Painting and Paper Hanging), which includes construction workers involved in sandblasting, was found to have 1.9% (3,000 workers) exposed to at least 10 times the NIOSH REL. The industry with the highest percentage of workers (6%) exposed to at least the NIOSH REL was the cut stone and stone products industry. CONCLUSION: Not enough is being done to control exposure to respirable crystalline silica. Engineering controls should be instituted in the industries identified by this work.
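The exceedance calculation behind such estimates is easy to illustrate. The following sketch uses made-up concentration values rather than IMIS data, and is not the NIOSH estimation algorithm; it only shows the arithmetic of counting samples at or above multiples of the 0.05 mg/m³ REL.

```python
# Hypothetical sketch of the exceedance arithmetic (not the NIOSH algorithm):
# given respirable crystalline silica samples for one SIC code, estimate the
# fraction of measurements at or above multiples of the REL of 0.05 mg/m3.
REL = 0.05  # NIOSH recommended exposure limit, mg/m3

# Made-up concentrations (mg/m3) standing in for OSHA IMIS samples.
samples = [0.01, 0.03, 0.06, 0.12, 0.28, 0.55, 0.70, 0.04, 0.09, 1.10]

for multiple in (1, 2, 5, 10):
    threshold = multiple * REL
    hits = sum(1 for c in samples if c >= threshold)
    pct = 100.0 * hits / len(samples)
    print(f">= {multiple:2d}x REL ({threshold:.2f} mg/m3): {pct:.1f}% of samples")
```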
13.
Measuring the distance between two program executions is a fundamental problem in dynamic analysis of software, and is useful in many test generation and debugging algorithms. This paper proposes a metric for measuring the distance between executions and specializes it to an important application: determining the similarity of failing test cases for the purpose of automated fault identification and localization when debugging with automatically generated compiler tests. The metric is based on a causal concept of distance, under which executions are similar to the degree that changes to the program itself, introduced by mutation, cause similar changes in the correctness of the executions. Specifically, if two failing test cases for a compiler can be made to pass by applying the same mutant, those two tests are more likely to be due to the same fault. We evaluate our metric using more than 50 faults and 2,800 test cases for two widely used real-world compilers and demonstrate improvements over state-of-the-art methods for fault identification and localization. A simple operator-selection approach to reducing the number of mutants can cut the cost of our approach by 70% while producing a gain in fault identification accuracy. We additionally show that our approach, although devised for compilers, is applicable as a conservative fault localization algorithm for other types of programs, and can help triage certain types of crashes found when fuzzing non-compiler programs more effectively than a state-of-the-art technique.
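A minimal sketch of the distance idea, with invented names and toy data rather than the paper's implementation: each failing test is represented by the set of mutants that "rescue" it (make it pass), and the distance between two tests is one minus the Jaccard similarity of those sets.

```python
# Hypothetical sketch of the mutant-based distance (names are invented):
# two failing tests are close to the degree that the same mutants rescue
# them, i.e. make them pass after being applied to the program.

def rescuing_mutants(test, mutants, passes):
    """Set of mutant ids whose application makes `test` pass.

    `passes(test, mutant)` is assumed to rebuild the mutated compiler and
    rerun the test; here it is an abstract callback.
    """
    return {m for m in mutants if passes(test, m)}

def execution_distance(rescue_a, rescue_b):
    """1 - Jaccard similarity of the rescuing-mutant sets."""
    union = rescue_a | rescue_b
    if not union:
        return 1.0  # no shared evidence at all
    return 1.0 - len(rescue_a & rescue_b) / len(union)

# Toy, precomputed data: which mutants rescue which failing test.
rescue = {
    "t1": {"m3", "m7"},
    "t2": {"m3", "m7", "m9"},  # likely the same fault as t1
    "t3": {"m12"},             # probably a different fault
}
for a in rescue:
    for b in rescue:
        if a < b:
            print(a, b, round(execution_distance(rescue[a], rescue[b]), 2))
```

Tests with small pairwise distance can then be grouped into buckets, one bucket per suspected fault, which is the triage use the abstract describes.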