20 similar documents found; search took 15 ms.
1.
2.
Automated program repair remains highly challenging, mainly because current techniques rely on test cases to validate candidate patches; since test cases are only partial specifications of the software, the final patches can be unreliable. In this paper, an automated program repair method is proposed that integrates genetic programming (GP) and model checking (MC). Because of its ability to verify finite-state systems, MC is employed as the criterion for calculating fitness when evolving programs in GP. Using MC for fitness evaluation, which is novel in the context of program repair, addresses an important gap in current heuristic approaches to program repair. Because it focuses fault detection on desired aspects of behavior, it enables programmers to detect faults according to the properties they define; this characteristic makes the method general and allows it to be customized for different application domains and their corresponding faults. Beyond various types of faults, the proposed method can also handle concurrency bugs, which many general repair methods cannot. To evaluate the proposed method, it was implemented in a tool, named JBF, that repairs Java programs. Experiments in which programs with known bugs were automatically repaired by JBF yielded encouraging and promising results.
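As a rough illustration of the fitness scheme described above, the sketch below evolves a one-parameter candidate "program" and scores it by how many correctness properties it satisfies. The property checks stand in for a model checker, and all names (`PROPERTIES`, `fitness`, `repair`) are invented for illustration, not taken from JBF.

```python
import random

# Invented toy setting: candidate "programs" are integer coefficients c of
# f(x) = c*x^2; "model checking" is stood in for by direct property checks.

PROPERTIES = [
    lambda f: f(0) == 0,    # property 1: zero maps to zero
    lambda f: f(2) == 4,    # property 2
    lambda f: f(-3) == 9,   # property 3
]

def fitness(candidate):
    """Fitness = number of properties the candidate satisfies, a stand-in
    for the count of properties a model checker verifies on the program."""
    f = lambda x, c=candidate: c * x * x
    return sum(1 for prop in PROPERTIES if prop(f))

def repair(seed=0, generations=50, pop_size=8):
    """Mutation-only evolutionary loop; stops when every property verifies."""
    rng = random.Random(seed)
    population = [rng.randint(-5, 5) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(PROPERTIES):
            return population[0]           # all properties hold: "repaired"
        half = pop_size // 2               # replace weaker half with mutants
        population[half:] = [p + rng.choice((-1, 1)) for p in population[:half]]
    return population[0]
```

Here the only candidate satisfying all three properties is `c = 1`, so the loop evolves toward it; a real MC-based fitness would instead count verified temporal properties.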
3.
The Journal of Logic and Algebraic Programming, 2010, 79(6): 350-362
This paper presents testing approaches based on model checking that use different testing criteria. First, test sets are built from different Kripke structure representations. Second, various rule coverage criteria for transitional, non-deterministic, cell-like P systems are considered in order to generate adequate test sets. Rule-based coverage criteria (simple rule coverage, context-dependent rule coverage, and variants) are defined, and for each criterion a set of LTL (Linear Temporal Logic) formulas is provided. A codification of a P system as a Kripke structure and the sets of LTL properties are used in test generation: for each criterion, test cases are obtained from the counterexamples of the associated LTL formulas, which are automatically generated from the Kripke structure codification of the P system. The method is illustrated with an implementation using a specific model checker, NuSMV.
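The counterexample-driven step can be illustrated on a toy two-rule P system encoded as an explicit state space. The breadth-first search below plays the role of the model checker (NuSMV is not invoked), and the rules and state encoding are invented for illustration: a "trap" property claims rule `r` never fires, and a path on which it does fire is the test case for simple rule coverage.

```python
from collections import deque

# States are multiplicity tuples (count of a, count of b); each rule is a
# (guard, rewrite) pair. Both rules are invented for this sketch.
RULES = {
    "r1": (lambda s: s[0] >= 1, lambda s: (s[0] - 1, s[1] + 2)),  # a -> bb
    "r2": (lambda s: s[1] >= 2, lambda s: (s[0], s[1] - 2)),      # bb -> ()
}

def counterexample(initial, rule):
    """Breadth-first search for a path on which `rule` becomes applicable,
    i.e. a counterexample to the trap property G !applicable(rule)."""
    guard, _ = RULES[rule]
    queue = deque([(initial, [])])
    seen = {initial}
    while queue:
        state, path = queue.popleft()
        if guard(state):
            return path + [rule]        # test case: rule sequence firing `rule`
        for name, (g, apply_) in RULES.items():
            if g(state):
                nxt = apply_(state)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [name]))
    return None                         # trap property holds: rule never fires
```

A `None` result means the criterion cannot be satisfied for that rule from the given initial configuration.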
4.
This paper describes a method for automated test generation and evaluation for real-time expert systems. The method supports dynamic testing, where test inputs are generated randomly within the constraints specified by a Test Specification Language. This allows the discovery of "unintended functionalities," which may not be found through static testing or expert-supplied test cases alone. Automated test generation also allows rapid regeneration of test suites as the system evolves through prototypes and versions. The Test Specification Language provides constructs for dealing with real-time constraints. Sample specifications and results implemented within the Activation Framework software development tool are also described. © 1994 John Wiley & Sons, Inc.
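A minimal sketch of constrained random input generation in this spirit, with an invented dictionary-based stand-in for the Test Specification Language (the real TSL syntax is not reproduced here, and the field names are hypothetical):

```python
import random

# Each field carries a range or choice constraint; inputs are drawn at
# random inside those constraints, including a timing field for the
# real-time aspect. The spec format is invented for this sketch.
SPEC = {
    "temperature": ("range", 0.0, 120.0),        # sensor reading, degrees C
    "valve":       ("choice", ["open", "closed"]),
    "period_ms":   ("range", 10, 100),           # arrival-period constraint
}

def generate_case(spec, rng):
    case = {}
    for field, constraint in spec.items():
        kind = constraint[0]
        if kind == "range":
            lo, hi = constraint[1], constraint[2]
            # float bounds -> continuous draw; int bounds -> discrete draw
            case[field] = rng.uniform(lo, hi) if isinstance(lo, float) else rng.randint(lo, hi)
        elif kind == "choice":
            case[field] = rng.choice(constraint[1])
    return case

def generate_suite(spec, n, seed=0):
    """Seeded, so a suite can be regenerated as the system evolves."""
    rng = random.Random(seed)
    return [generate_case(spec, rng) for _ in range(n)]
```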
5.
Raquel Blanco, Javier Tuya, Belarmino Adenso-Díaz. Information and Software Technology, 2009, 51(4): 708-720
Techniques for the automatic generation of test cases try to efficiently find a small set of cases that fulfill a given adequacy criterion, thus contributing to a reduction in the cost of software testing. In this paper we present and analyze two versions of an approach based on the scatter search metaheuristic for the automatic generation of software test cases under a branch coverage adequacy criterion. The first test case generator, called TCSS, uses a diversity property to extend the search to all branches of the program under test in order to generate test cases that cover them. The second, called TCSS-LS, extends the first by combining the diversity property with a local search method that intensifies the search for test cases covering the difficult branches. We present the results obtained by our generators and carry out a detailed comparison with many other generators, showing the good performance of our approach.
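The diversity idea, keeping only candidates that add branch coverage, can be sketched as follows; the instrumented toy program and the acceptance rule are invented simplifications of the TCSS generator, not its actual algorithm:

```python
import random

def branches(x, y):
    """Toy program under test, instrumented to report the branches taken."""
    taken = set()
    if x > y:
        taken.add("b1")
    else:
        taken.add("b2")
    if x == 0:
        taken.add("b3")
    else:
        taken.add("b4")
    return taken

ALL_BRANCHES = {"b1", "b2", "b3", "b4"}

def generate_suite(seed=0, budget=500):
    """Random candidates; keep a case only if it covers a new branch
    (a crude stand-in for the scatter search diversity property)."""
    rng = random.Random(seed)
    covered, suite = set(), []
    for _ in range(budget):
        case = (rng.randint(-10, 10), rng.randint(-10, 10))
        new = branches(*case) - covered
        if new:
            covered |= new
            suite.append(case)
        if covered == ALL_BRANCHES:
            break
    return suite, covered
```

Branches like `b3` (requiring `x == 0`) are the "difficult" ones that motivate the local-search extension in TCSS-LS.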
6.
Automated software test data generation
An alternative approach to test-data generation is presented, based on actual execution of the program under test, function-minimization methods, and dynamic data-flow analysis. Test data are developed for the program using actual values of input variables. When the program is executed, the program execution flow is monitored. If an undesirable execution flow is observed during program execution, function-minimization search algorithms are used to automatically locate the values of input variables for which the selected path is traversed. In addition, dynamic data-flow analysis is used to determine those input variables responsible for the undesirable program behavior, significantly increasing the speed of the search process. The approach is then extended to programs with dynamic data structures, and a search method based on dynamic data-flow analysis and backtracking is presented. In this approach, the values of array indexes and pointers are known at each step of program execution; this information is used to overcome the difficulties of array and pointer handling.
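The function-minimization step can be sketched as a branch-distance search: to drive a branch such as `x == 2*y` to its true outcome, candidate inputs are executed and a distance function is minimized by direct search. The pattern-search loop below is an invented simplification; the paper additionally uses dynamic data-flow analysis to select which input variables to vary.

```python
def branch_distance(x, y):
    """Zero exactly when the target branch condition (x == 2*y) holds."""
    return abs(x - 2 * y)

def minimize(x, y, step=16, max_iters=1000):
    """Simple pattern search over integer inputs: try one step along each
    axis, keep an improving move, otherwise halve the step size."""
    for _ in range(max_iters):
        if branch_distance(x, y) == 0:
            return x, y                  # branch reached: test data found
        moves = [(x + step, y), (x - step, y), (x, y + step), (x, y - step)]
        best = min(moves, key=lambda m: branch_distance(*m))
        if branch_distance(*best) < branch_distance(x, y):
            x, y = best
        elif step > 1:
            step //= 2                   # refine when no move improves
        else:
            break
    return x, y
```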
7.
The task of finding a set of test sequences that provides good coverage of industrial circuits is infeasible because of the size of the circuits. For small critical subcircuits of the design, however, designers can create a set of test sequences that achieves good coverage. These sequences cannot be used on the full design because the inputs to the subcircuit may not be accessible. In this work we present an efficient test generation algorithm that receives a test sequence created for the subcircuit and finds a test sequence for the full design that reproduces the given sequence on the subcircuit. The algorithm uses a new technique called dynamic transition relations to increase its efficiency. The most common and most expensive step in our algorithm is the computation of the set of predecessors of a set of states. To make this computation more efficient, we exploit a partitioning of the transition relation into a set of simpler relations. At every step we use only those relations that are necessary, resulting in a smaller relation than the original one. A different relation is used for each step, hence the name dynamic transition relations. The same idea can be used to improve symbolic model checking for the temporal logic CTL. We have implemented the new method in SMV and run it on several large circuits. Our experiments indicate that the new method can provide gains of up to two orders of magnitude in time and space during verification. These results show that dynamic transition relations can make it possible to verify circuits that were previously unmanageable due to their size and complexity.
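The dynamic-relation idea can be sketched with explicit sets in place of BDDs: the transition relation is stored in parts, and each backward step applies only the parts whose successors intersect the current target. The partitioning below is invented for illustration.

```python
# Each part is a set of (predecessor, successor) pairs; the full transition
# relation is their union. The states and partitioning are invented.
PARTS = {
    "p1": {(0, 1), (1, 2)},
    "p2": {(2, 3)},
    "p3": {(3, 0)},
}

def predecessors(target):
    """One backward step using the dynamic relation: only parts whose
    successor states intersect `target` are applied."""
    relevant = [part for part in PARTS.values()
                if any(succ in target for (_, succ) in part)]
    return {pred for part in relevant for (pred, succ) in part if succ in target}

def backward_reach(target):
    """Iterated predecessor computation to a fixed point, the core step of
    both the test generation algorithm and CTL backward model checking."""
    reached = set(target)
    frontier = set(target)
    while frontier:
        frontier = predecessors(frontier) - reached
        reached |= frontier
    return reached
```

With BDDs the saving comes from building a smaller relation per image step; the set version only mirrors the control flow of that optimization.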
8.
Testing is the dominant validation activity used by industry today, and there is an urgent need to improve its effectiveness, both with respect to the time and resources for test generation and execution and with respect to the obtained test coverage. We present a new technique for the automatic generation of real-time black-box conformance tests for non-deterministic systems from a determinizable class of timed automata specifications with a dense time interpretation. In contrast to other attempts, our tests are generated using a coarse equivalence class partitioning of the specification. To analyze the specification, to synthesize the timed tests, and to guarantee coverage with respect to a coverage criterion, we use the efficient symbolic techniques recently developed for model checking of real-time systems. Application of our prototype tool to a realistic specification shows promising results in terms of both the test suite size and the time and space used for test generation.
9.
10.
This paper describes a new systematic approach to code generation. The approach is based on an orthogonal model, in which implementation of language-level operators ('operators') and addressing operators ('operands') is achieved by two independent subtasks. Each of these phases is specified using a set of decision trees that encode the set of possible implementation templates for each language feature and the set of constraints under which each can be applied. Code selection in each phase is achieved by interpreting these trees using a single comprehensive selection algorithm. The method extends easily to machine independence across a large class of target computers by abstracting the implementation templates into machine-independent implementation strategies. The selection algorithm is then modified to select between implementation strategies based on a machine capability 'menu' that describes each target machine in terms of the subset of implementation strategies for which it has corresponding instruction sequences. The method has been used to implement a prototype machine-independent code generator for the Concurrent Euclid programming language, whose generated code density is consistently within four per cent of production machine-dependent code generators across its entire target class of five modern computers.
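A toy rendering of the two-phase, decision-tree-driven selection might look as follows; the templates, constraints, and target "machine" are invented and far simpler than the paper's tables for Concurrent Euclid:

```python
# Operand phase: decision list of (constraint, addressing template).
OPERAND_TREE = [
    (lambda v: v["kind"] == "const", "#${value}"),       # immediate
    (lambda v: v["kind"] == "local", "${offset}(FP)"),   # frame-relative
    (lambda v: True,                 "${label}"),        # global fallback
]

# Operator phase: per-operator decision list of (constraint, instruction).
OPERATOR_TREE = {
    "add": [
        (lambda a, b: a["kind"] == "const" and a["value"] == 1, "INC {rhs}"),
        (lambda a, b: True, "ADD {lhs},{rhs}"),
    ],
}

def select(tree, *args):
    """Walk a decision list and return the first applicable template."""
    for constraint, template in tree:
        if constraint(*args):
            return template
    raise ValueError("no applicable template")

def emit_operand(v):
    t = select(OPERAND_TREE, v)
    return (t.replace("${value}", str(v.get("value", "")))
             .replace("${offset}", str(v.get("offset", "")))
             .replace("${label}", str(v.get("label", ""))))

def emit_add(a, b):
    t = select(OPERATOR_TREE["add"], a, b)
    return t.format(lhs=emit_operand(a), rhs=emit_operand(b))
```

Machine independence would replace the concrete templates with strategy names looked up in a per-machine capability menu.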
11.
Cyber-physical systems are found in numerous applications throughout society. The principal barrier to developing trustworthy cyber-physical systems is the lack of expressive modelling and specification formalisms supported by efficient tools and methodologies. To overcome this barrier, we extend in this paper the modelling formalism of the tool UPPAAL-SMC to stochastic hybrid automata, thus providing the expressive power required for modelling complex cyber-physical systems. The application of statistical model checking provides a highly scalable technique for analyzing performance properties of this formalism. A particular kind of cyber-physical system is the Smart Grid, which together with intelligent, energy-aware buildings will play a major role in achieving an energy-efficient society of the future. In this paper we present a framework in UPPAAL-SMC for energy-aware buildings that allows evaluating the performance of proposed control strategies in terms of their induced comfort and energy profiles under varying environmental settings (e.g. weather, user behavior, etc.). To demonstrate the intended use and usefulness of our framework, we present an application to the Hybrid Systems Verification Benchmark.
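The statistical model checking idea, estimating a performance property by simulating many runs rather than exploring exhaustively, can be sketched with a toy stochastic room-temperature model; the model and comfort property are invented and do not use UPPAAL-SMC itself:

```python
import random

def simulate_run(rng, steps=48, temp=20.0):
    """One run of an invented hybrid model: a thermostat heats when the
    room is below 20 C, otherwise cools, plus a random disturbance."""
    for _ in range(steps):
        heating = temp < 20.0
        temp += (0.8 if heating else -0.4) + rng.uniform(-0.3, 0.3)
    return temp

def estimate_comfort(n_runs=500, lo=18.0, hi=23.0, seed=1):
    """Monte Carlo estimate of P(final temperature in the comfort band
    [lo, hi]), the core of the SMC approach."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_runs) if lo <= simulate_run(rng) <= hi)
    return hits / n_runs
```

Different control strategies could be compared by swapping the thermostat rule and re-estimating both the comfort probability and an energy measure.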
12.
13.
The use of model checkers for automated software testing has received some attention in the literature: it is convenient because it allows fully automated generation of test suites for many different test objectives. On the other hand, model checkers were not originally meant to be used this way but for formal verification, so using model checkers for testing is sometimes perceived as a "hack". Indeed, several drawbacks result from the use of model checkers for test case generation. If model checkers were designed or adapted to take into account the needs that result from the application to software testing, this could lead to significant improvements in test suite quality and performance. In this paper we identify the drawbacks of current model checkers when used for testing. We illustrate techniques to overcome these problems and show how they could be integrated into the model checking process. In essence, the described techniques can be seen as a general road map for turning model checkers into general-purpose testing tools.
14.
Robby, Edwin Rodríguez, Matthew B. Dwyer, John Hatcliff. International Journal on Software Tools for Technology Transfer (STTT), 2006, 8(3): 280-299
The use of assertions to express correctness properties of programs is growing in practice. Assertions provide a form of lightweight checkable specification that can be very effective in finding defects in programs and in guiding developers to the cause of a problem. A wide variety of assertion languages and associated validation techniques have been developed, but run-time monitoring is commonly thought to be the only practical solution. In this paper, we describe how specifications written in the Java Modeling Language (JML), a general-purpose behavioral specification and assertional language for Java, can be validated using a customized model checker built on top of the Bogor model checking framework. Our experience illustrates the need for customized state-space representations and reduction strategies in model checking frameworks in order to effectively check the kind of strong behavioral specifications that can be written in JML. We discuss the advantages and tradeoffs of model checking relative to other specification validation techniques and present data suggesting that the cost of model checking strong specifications is practical for several real programs.
This is an extended version of the paper "Checking Strong Specifications Using an Extensible Model Checking Framework" that appeared in Proceedings of the 10th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2004. This work was supported in part by the U.S. Army Research Office (DAAD190110564), by DARPA/IXO's PCES program (AFRL Contract F33615-00-C-3044), by NSF (CCR-0306607), by Lockheed Martin, and by Rockwell-Collins.
15.
Graph transformation has recently become more and more popular as a general, rule-based visual specification paradigm to formally capture (a) requirements or behavior of user models (on the model level) and (b) the operational semantics of modeling languages (on the meta-level), as demonstrated by benchmark applications around the Unified Modeling Language (UML). The current paper focuses on model checking-based automated formal verification of graph transformation systems used either on the model level or the meta-level. We present a general translation that takes as input (i) a metamodel of an arbitrary visual modeling language, (ii) a set of graph transformation rules that defines a formal operational semantics for the language, and (iii) an arbitrary well-formed model instance of the language, and generates a transition system (TS) that serves as the underlying mathematical specification formalism for various model checker tools. The main theoretical benefit of our approach is an optimization technique that projects only the dynamic parts of the graph transformation system into the target transition system, which results in a drastic reduction of the state space. The main practical benefit is the use of existing back-end model checker tools, which directly provides formal verification facilities (without additional effort to implement an analysis tool) for many practical applications captured in a very high-level visual notation. The practical feasibility of the approach is demonstrated by modeling and analyzing the well-known verification benchmark of dining philosophers on both the model and meta-level.
16.
Yannis Smaragdakis, Christoph Csallner, Ranjith Subramanian. Automated Software Engineering, 2009, 16(1): 73-99
We explore the automatic generation of test data that respect constraints expressed in the Object-Role Modeling (ORM) language. ORM is a popular conceptual modeling language, primarily targeting database applications, with significant use in practice. Even the general problem of checking whether an ORM diagram is satisfiable is quite hard: restricted forms are easily NP-hard, and the problem is undecidable for some expressive formulations of ORM. Brute-force mapping to input for constraint and SAT solvers does not scale: state-of-the-art solvers fail to find data satisfying the uniqueness and mandatory constraints in realistic time even for small examples. We instead define a restricted subset of ORM that allows efficient reasoning yet contains most constraints overwhelmingly used in practice. We show that the problem of deciding whether these constraints are consistent (i.e., whether we can generate appropriate test data) is solvable in polynomial time, and we produce a highly efficient (interactive-speed) checker. Additionally, we analyze over 160 ORM diagrams that capture data models from industrial practice and demonstrate that our subset of ORM is expressive enough to handle the vast majority of them.
17.
Context: Aspect-oriented programming (AOP) has been promoted as a means of modularizing software systems by raising the abstraction level and reducing the scattering and tangling of crosscutting concerns. Studies in the literature have shown the usefulness and application of AOP across various fields of research and domains. Despite this, research shows that AOP is currently used cautiously due to its natural impact on testability and maintainability.
Objective: To realize the benefits of AOP and to increase its adoption, aspects developed using AOP should be subjected to automated testing. Automated testing, one of the most pressing needs of the software industry for reducing both the effort and the cost of assuring correctness, remains a delicate issue for aspect-oriented programs and still requires advancement before reaching maturity.
Method: Previous attempts and studies on automated test generation for aspect-oriented programs have been very limited. This paper proposes a rigorous automated test generation technique, called RAMBUTANS, with tool support based on guided random testing for AspectJ programs.
Results: The paper reports the results of a thorough empirical study of 9 AspectJ benchmark programs, including non-trivial and larger software, using mutation analysis to compare RAMBUTANS with the four existing automated AOP testing approaches in terms of fault detection effectiveness and test effort efficiency. The results of the experiment and statistical tests, supplemented by effect size measures, provided evidence of the effectiveness and efficiency of the proposed technique at the 99% confidence level (i.e. p < 0.01).
Conclusion: The study showed that the resulting randomized tests were reasonably good for AOP testing, so the proposed technique could be worth using as an effective and efficient AOP-specific automated test generation technique.
18.
Web services designed for human users are being abused by computer programs (bots). Bots steal thousands of free e-mail accounts in a minute, participate in online polls to skew results, and irritate people by joining online chat rooms. These real-world issues have recently generated a new research area called human interactive proofs (HIP), whose goal is to defend services from malicious attacks by differentiating bots from human users. In this paper, we make two major contributions to HIP. First, based on both theoretical and practical considerations, we propose a set of HIP design guidelines that ensure a HIP system is secure and usable. Second, we propose a new HIP algorithm based on detecting human faces and facial features. The human face is the most familiar object to humans, making it possibly the best candidate for HIP. We conducted user studies that showed the ease of use of our system for human users. We designed attacks using the best existing face detectors and demonstrated the challenge they present to bots.
19.
20.
This article, from the Motorola (now Freescale) PowerPC design group, presents an interesting synergy among test, equivalence verification, and constraints. The authors use RTL, gate, and switch models of a design in two different flows, one for test and one for functional verification, to show that rectifying constraints and merging tests between the two flows saves significant presilicon debug effort.