Similar Documents
20 similar documents found (search time: 0 ms)
1.
Grammars are traditionally used to recognize or parse sentences in a language, but they can also be used to generate sentences. In grammar‐based test generation (GBTG), context‐free grammars are used to generate sentences that are interpreted as test cases. A generator reads a grammar G and generates L(G), the language accepted by the grammar. Often L(G) is so large that it is not practical to execute all of the generated cases. Therefore, GBTG tools support ‘tags’: extra‐grammatical annotations which restrict the generation. Since its introduction in the early 1970s, GBTG has become well established: proven on industrial projects and widely published in academic venues. Despite the demonstrated effectiveness, the tool support is uneven; some tools target specific domains, e.g. compiler testing, while others are proprietary. The tools can be difficult to use and the precise meaning of the tags is sometimes unclear. As a result, while many testing practitioners and researchers are aware of GBTG, few have detailed knowledge or experience. We present YouGen, a new GBTG tool supporting many of the tags provided by previous tools. In addition, YouGen incorporates covering‐array tags, which support a generalized form of pairwise testing. These tags add considerable power to GBTG tools and have been available only in limited form in previous GBTG tools. We provide semantics for the YouGen tags using parse trees and a new construct, generation trees. We illustrate YouGen with both simple examples and a number of industrial case studies. Copyright © 2010 John Wiley & Sons, Ltd.
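To make the grammar-to-test-case idea concrete, here is a minimal Python sketch of grammar-based generation. It is not YouGen; the toy grammar, the depth bound standing in for a restricting tag, and all names are illustrative assumptions.

```python
import random

# Hypothetical toy grammar: nonterminal -> list of alternative productions.
GRAMMAR = {
    "expr": [["expr", "+", "term"], ["term"]],
    "term": [["(", "expr", ")"], ["num"]],
    "num":  [["0"], ["1"], ["42"]],
}

def generate(symbol="expr", depth=0, max_depth=6):
    """Randomly expand `symbol` into a sentence, i.e. one generated test case."""
    if symbol not in GRAMMAR:                 # terminal symbol: emit as-is
        return symbol
    alternatives = GRAMMAR[symbol]
    if depth >= max_depth:                    # crude stand-in for a restricting tag:
        alternatives = [alternatives[-1]]     # force the non-recursive alternative
    production = random.choice(alternatives)
    return " ".join(generate(s, depth + 1, max_depth) for s in production)

if __name__ == "__main__":
    for _ in range(3):
        print(generate())                     # each line is one generated test case
```

In a real GBTG tool the cut-off would be expressed through tags (for example, limits on recursion depth or on how many sentences are produced) rather than a hard-coded bound.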

2.
Search‐based techniques have been applied successfully to the task of generating unit tests for object‐oriented software. However, as for any meta‐heuristic search, the efficiency heavily depends on many factors; seeding, which refers to the use of previous related knowledge to help solve the testing problem at hand, is one such factor that may strongly influence this efficiency. This paper investigates different seeding strategies for unit test generation, in particular seeding of numerical and string constants derived statically and dynamically, seeding of type information and seeding of previously generated tests. To understand the effects of these seeding strategies, the results of a large empirical analysis carried out on a large collection of open‐source projects from the SF110 corpus and the Apache Commons repository are reported. These experiments show with strong statistical confidence that, even for a testing tool already able to achieve high coverage, the use of appropriate seeding strategies can further improve performance. © 2016 The Authors. Software Testing, Verification and Reliability Published by John Wiley & Sons Ltd.
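As a rough illustration of constant seeding (a sketch only, not how the paper's tool implements it; the constant pools, the probability P_SEED and the helper names are assumptions), a test generator can bias its random input generation toward literals mined from the unit under test:

```python
import random

# Constants assumed to have been mined statically (literals in the source) or
# dynamically (values observed at run time) from the unit under test.
SEEDED_INTS = [0, 1, -1, 1024]
SEEDED_STRINGS = ["", "admin", "yyyy-MM-dd"]
P_SEED = 0.5                      # probability of reusing seeded knowledge

def random_int():
    """Return either a seeded constant or a purely random integer."""
    if SEEDED_INTS and random.random() < P_SEED:
        return random.choice(SEEDED_INTS)
    return random.randint(-2**31, 2**31 - 1)

def random_string(max_len=10):
    """Return either a seeded string or a random printable string."""
    if SEEDED_STRINGS and random.random() < P_SEED:
        return random.choice(SEEDED_STRINGS)
    length = random.randint(0, max_len)
    return "".join(chr(random.randint(32, 126)) for _ in range(length))
```

Seeding of previously generated tests works analogously: existing test cases are injected into the initial population instead of starting the search from purely random individuals.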

3.
The use of metaheuristic search techniques for the automatic generation of test data has been a burgeoning interest for many researchers in recent years. Previous attempts to automate the test generation process have been limited, having been constrained by the size and complexity of software, and the basic fact that, in general, test data generation is an undecidable problem. Metaheuristic search techniques offer much promise in regard to these problems. Metaheuristic search techniques are high‐level frameworks, which utilize heuristics to seek solutions for combinatorial problems at a reasonable computational cost. To date, metaheuristic search techniques have been applied to automate test data generation for structural and functional testing; the testing of grey‐box properties, for example safety constraints; and also non‐functional properties, such as worst‐case execution time. This paper surveys some of the work undertaken in this field, discussing possible new future directions of research for each of its different individual areas. Copyright © 2004 John Wiley & Sons, Ltd.

4.
Automatic test case generation algorithms are classified, according to the underlying generation technique, into four categories: random, genetic algorithm, ant colony optimization, and particle swarm optimization. The state of the art and recent progress of each category are introduced, analysed and discussed. Finally, research on automatic software test case generation is summarized.

5.
Many modern automated test generators are based either on metaheuristic search techniques or on constraint solvers. Both approaches have their advantages, but they also have specific drawbacks: Search‐based methods may get stuck in local optima and degrade when the search landscape offers no guidance; constraint‐based approaches, on the other hand, can only handle certain domains efficiently. This paper describes a method that integrates both techniques and delivers the best of both worlds. On a high‐level view, the proposed method uses a genetic algorithm to generate tests, but the twist is that during evolution, a constraint solver is used to ensure that mutated offspring efficiently explores different control flow. Experiments on 20 case study programmes show that on average the combination improves branch coverage by 28% over search‐based techniques while reducing the number of tests by 55%, and improves coverage by 13% over constraint‐based techniques while reducing the number of tests by 73%. Copyright © 2013 John Wiley & Sons, Ltd.
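The following sketch illustrates the general idea of solver-assisted mutation under simplifying assumptions (it is not the paper's tool; the branch predicate is hand-encoded and the z3-solver package is used as an example constraint solver):

```python
import random
from z3 import Int, Solver, sat      # requires the z3-solver package

def search_mutate(x):
    """Ordinary search-based mutation: a small random perturbation."""
    return x + random.randint(-10, 10)

def solver_mutate(branch_predicate):
    """Ask the constraint solver for an input that takes an uncovered branch."""
    x = Int("x")
    s = Solver()
    s.add(branch_predicate(x))       # hand-encoded condition of the target branch
    if s.check() == sat:
        return s.model()[x].as_long()
    return None                      # branch infeasible: fall back to the search

# An equality branch such as `if 3 * x + 7 == 133:` is hard to hit by random
# perturbation but trivial for the solver.
print(solver_mutate(lambda x: 3 * x + 7 == 133))   # -> 42
```

In a combined algorithm along these lines, the evolution would reserve such solver calls for branches that purely search-based mutation keeps failing to cover.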

6.
Test data generation is one of the most technically challenging steps of testing software, but most commercial systems currently incorporate very little automation for this step. This paper presents results from a project that is trying to find ways to incorporate test data generation into practical test processes. The results include a new procedure for automatically generating test data that incorporates ideas from symbolic evaluation, constraint‐based testing, and dynamic test data generation. It takes an initial set of values for each input, and dynamically ‘pushes’ the values through the control‐flow graph of the program, modifying the sets of values as branches in the program are taken. The result is usually a set of values for each input parameter that has the property that any choice from the sets will cause the path to be traversed. This procedure uses new analysis techniques, offers improvements over previous research results in constraint‐based testing, and combines several steps into one coherent process. The dynamic nature of this procedure yields several benefits. Moving through the control flow graph dynamically allows path constraints to be resolved immediately, which is more efficient both in space and time, and more often successful than constraint‐based testing. This new procedure also incorporates an intelligent search technique based on bisection. The dynamic nature of this procedure also allows certain improvements to be made in the handling of arrays, loops, and expressions; language features that are traditionally difficult to handle in test data generation systems. The paper presents the test data generation procedure, examples to explain the working of the procedure, and results from a proof‐of‐concept implementation. Copyright © 1999 John Wiley & Sons, Ltd.
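A much-simplified illustration of splitting an input domain at a branch with a bisection-style search is given below. It is a sketch under strong assumptions (one integer input, one relational condition whose truth value changes at most once over the interval), not the paper's procedure.

```python
def split_domain(lo, hi, predicate):
    """Return the sub-interval of [lo, hi] whose integer values satisfy
    `predicate`, assuming its truth value changes at most once on the interval."""
    if predicate(lo) and predicate(hi):
        return (lo, hi)                      # whole domain already takes the branch
    if not predicate(lo) and not predicate(hi):
        return None                          # branch infeasible for this domain
    a, b = lo, hi                            # invariant: predicate(a) != predicate(b)
    while b - a > 1:                         # bisect to locate the boundary
        mid = (a + b) // 2
        if predicate(mid) == predicate(a):
            a = mid
        else:
            b = mid
    # The boundary lies between a and b; keep the satisfying side.
    return (lo, a) if predicate(lo) else (b, hi)

# Usage: values of x in [0, 100] that take the true branch of `x * 7 < 300`.
print(split_domain(0, 100, lambda x: x * 7 < 300))   # -> (0, 42)
```

In the full procedure, a set like this would be carried along every edge of the control-flow graph and narrowed again at each subsequent branch.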

7.
An integrated automatic test data generation system (total citations: 3; self-citations: 0; citations by others: 3)
The Godzilla automatic test data generator is an integrated collection of tools that implements constraint-based testing, a relatively new test data generation method based on mutation analysis. Constraint-based testing integrates mutation analysis with several other testing techniques, including statement coverage, branch coverage, domain perturbation, and symbolic evaluation. Because Godzilla uses a rule-based approach to generate test data, it is easily extendible to allow new testing techniques to be integrated into the current system. This article describes the system that has been built to implement constraint-based testing. Godzilla's design emphasizes orthogonality and modularity, allowing relatively easy extensions. Godzilla's internal structure and algorithms are described with emphasis on internal structures of the system and the engineering problems that were solved during the implementation. Parts of this research were supported by Contract F30602-85-C-0255 through Rome Air Development Center while the author was a graduate student at the Georgia Institute of Technology.

8.
Automatic test data generation is a very popular domain in the field of search‐based software engineering. Traditionally, the main goal has been to maximize coverage. However, other objectives can be defined, such as the oracle cost, which is the cost of executing the entire test suite and the cost of checking the system behavior. Indeed, in very large software systems, the cost spent to test the system can be an issue, so it makes sense to consider two conflicting objectives: maximizing the coverage and minimizing the oracle cost. This is the approach taken in this paper. We mainly compared two approaches to deal with the multi‐objective test data generation problem: a direct multi‐objective approach and a combination of a mono‐objective algorithm together with multi‐objective test case selection optimization. Concretely, in this work, we used four state‐of‐the‐art multi‐objective algorithms and two mono‐objective evolutionary algorithms followed by a multi‐objective test case selection based on Pareto efficiency. The experimental analysis compares these techniques on two different benchmarks. The first one is composed of 800 Java programs created through a program generator. The second benchmark is composed of 13 real programs extracted from the literature. In the direct multi‐objective approach, the results indicate that the oracle cost can be properly optimized; however, the full branch coverage of the system poses a great challenge. Regarding the mono‐objective algorithms, although they need a second phase of test case selection for reducing the oracle cost, they are very effective in maximizing the branch coverage. Copyright © 2011 John Wiley & Sons, Ltd.
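A minimal sketch of the Pareto-efficiency step behind the second approach might look as follows; the objective values are invented and the code is illustrative, not the paper's implementation.

```python
def dominates(a, b):
    """a dominates b if its coverage is no lower and its cost no higher,
    and it is strictly better in at least one of the two objectives."""
    cov_a, cost_a = a
    cov_b, cost_b = b
    return (cov_a >= cov_b and cost_a <= cost_b) and (cov_a > cov_b or cost_a < cost_b)

def pareto_front(candidates):
    """Keep only the non-dominated (coverage, oracle cost) candidates."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other is not c)]

# Usage: (branch coverage %, oracle cost) for four candidate test suites.
print(pareto_front([(90, 120), (90, 80), (70, 30), (60, 50)]))
# -> [(90, 80), (70, 30)]   # the other two suites are dominated
```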

9.
Automatic generation of structural test data is a central problem in structural testing. After surveying automatic structural test data generation methods, this review focuses on methods based on evolutionary algorithms. It summarizes their basic idea and overall workflow and classifies them, according to how the fitness function is constructed, into three categories: coverage-oriented, distance-oriented and combined methods. Drawing on the related literature, it analyses the technical characteristics of each category and compares their respective strengths and weaknesses. Finally, it points out remaining limitations and discusses directions for future work.

10.
Although the majority of software testing in industry is conducted at the system level, most formal research has focused on the unit level. As a result, most system‐level testing techniques are only described informally. This paper presents formal testing criteria for system level testing that are based on formal specifications of the software. Software testing can only be formalized and quantified when a solid basis for test generation can be defined. Formal specifications represent a significant opportunity for testing because they precisely describe what functions the software is supposed to provide in a form that can be automatically manipulated. This paper presents general criteria for generating test inputs from state‐based specifications. The criteria include techniques for generating tests at several levels of abstraction for specifications (transition predicates, transitions, pairs of transitions and sequences of transitions). These techniques provide coverage criteria that are based on the specifications and are made up of several parts, including test prefixes that contain inputs necessary to put the software into the appropriate state for the test values. The test generation process includes several steps for transforming specifications to tests. These criteria have been applied to a case study to compare their ability to detect seeded faults. Copyright © 2003 John Wiley & Sons, Ltd.
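To illustrate one of these abstraction levels (pairs of transitions) on a toy state-based specification, where the machine, its states and its events are invented for illustration, each test is a prefix that reaches the pair's starting state followed by the two transitions themselves:

```python
from collections import deque

# Hypothetical state-based specification: state -> {event: next_state}.
FSM = {
    "Idle":   {"insertCard": "Active"},
    "Active": {"enterPin": "Ready", "eject": "Idle"},
    "Ready":  {"withdraw": "Active", "eject": "Idle"},
}
START = "Idle"

def prefix_to(state):
    """Shortest event sequence (test prefix) that puts the machine in `state`."""
    queue, seen = deque([(START, [])]), {START}
    while queue:
        s, path = queue.popleft()
        if s == state:
            return path
        for event, nxt in FSM[s].items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [event]))
    return None

def transition_pair_tests():
    """One test per adjacent pair of transitions: prefix + the two events."""
    tests = []
    for state, transitions in FSM.items():
        for e1, mid in transitions.items():
            for e2 in FSM.get(mid, {}):
                tests.append(prefix_to(state) + [e1, e2])
    return tests

for test in transition_pair_tests():
    print(test)
```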

11.
This paper presents a technique that uses a genetic algorithm for automatic test‐data generation. A genetic algorithm is a heuristic that mimics the evolution of natural species in searching for the optimal solution to a problem. In the test‐data generation application, the solution sought by the genetic algorithm is test data that causes execution of a given statement, branch, path, or definition–use pair in the program under test. The test‐data‐generation technique was implemented in a tool called TGen, in which parallel processing was used to improve the performance of the search. To experiment with TGen, a random test‐data generator called Random was also implemented. Both TGen and Random were used to experiment with the generation of test‐data for statement and branch coverage of six programs. Copyright © 1999 John Wiley & Sons, Ltd.
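The core of such a genetic algorithm can be sketched in a few lines. This is an illustration, not TGen: the target branch, the branch-distance fitness |x - 10000| and the operators are simplifying assumptions.

```python
import random

TARGET = 10000                       # goal: cover the branch `if x == 10000:`
POP_SIZE, GENERATIONS = 20, 300

def fitness(x):
    """Branch distance: 0 means the target branch is executed."""
    return abs(x - TARGET)

def evolve():
    population = [random.randint(-10**6, 10**6) for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness)
        if fitness(population[0]) == 0:
            return population[0]                 # covering test datum found
        parents = population[:POP_SIZE // 2]     # truncation selection
        children = []
        while len(children) < POP_SIZE - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) // 2                 # arithmetic crossover
            if random.random() < 0.5:
                child += random.randint(-50, 50) # mutation
            children.append(child)
        population = parents + children
    return min(population, key=fitness)          # best effort if not covered

print(evolve())
```

The point of the sketch is only the guidance the fitness function provides; the paper's tool additionally parallelizes the search, and Random would simply sample inputs without any such guidance.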

12.
Existing automated test data generation techniques tend to start from scratch, implicitly assuming that no pre‐existing test data are available. However, this assumption may not always hold, and where it does not, there may be a missed opportunity; perhaps the pre‐existing test cases could be used to assist the automated generation of additional test cases. This paper introduces search‐based test data regeneration, a technique that can generate additional test data from existing test data using a meta‐heuristic search algorithm. The proposed technique is compared to a widely studied test data generation approach in terms of both efficiency and effectiveness. The empirical evaluation shows that test data regeneration can be up to 2 orders of magnitude more efficient than existing test data generation techniques, while achieving comparable effectiveness in terms of structural coverage and mutation score. Copyright © 2010 John Wiley & Sons, Ltd.

13.
Software testing is one of the most crucial and analytical aspects of assuring that developed software meets prescribed quality standards. The software development process invests at least 50% of the total cost in testing. Designing optimal and efficacious test data is an important and challenging activity owing to the nonlinear structure of software, and the type and scope of test cases determine the quality of the test data. To address this issue, software testing tools should employ intelligence-based soft computing techniques such as particle swarm optimization (PSO) and genetic algorithms (GA) to generate smart and efficient test data automatically. This paper presents a hybrid PSO- and GA-based heuristic for the automatic generation of test suites. We describe the design and implementation of the proposed strategy and evaluate the model through experiments with ten container classes from the Java standard library. The algorithm is analysed statistically with branch coverage as the test adequacy criterion; performance is measured as percentage coverage per unit time and the percentage of faults detected by the generated test data. We compare our work with heuristics based on GA, PSO, existing GA/PSO hybrid strategies and a memetic algorithm. The results show that our approach generates test cases efficiently.
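For reference, the particle update at the heart of PSO (before any GA-style crossover or mutation is layered on top) looks roughly like this; the coefficients are conventional illustrative values, not those of the paper, and each particle is assumed to encode a numeric test input.

```python
import random

W, C1, C2 = 0.7, 1.5, 1.5   # inertia and acceleration coefficients (illustrative)

def pso_step(position, velocity, personal_best, global_best):
    """One velocity/position update for a particle encoding a candidate test input."""
    r1, r2 = random.random(), random.random()
    velocity = (W * velocity
                + C1 * r1 * (personal_best - position)
                + C2 * r2 * (global_best - position))
    return position + velocity, velocity
```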

14.
Increasingly, modern‐day software systems are being built by combining externally‐developed software components with application‐specific code. For such systems, existing program‐analysis‐based software engineering techniques may not directly apply, due to lack of information about components. To address this problem, the use of component metadata has been proposed. Component metadata are metadata and metamethods provided with components that retrieve or calculate information about those components. In particular, two component‐metadata‐based approaches for regression test selection are described: one using code‐based component metadata and the other using specification‐based component metadata. The results of empirical studies that illustrate the potential of these techniques to provide savings in re‐testing effort are provided. Copyright © 2006 John Wiley & Sons, Ltd.

15.
Oracles used for testing graphical user interface (GUI) programmes are required to take into consideration complicating factors such as variations in screen resolution or colour scheme when comparing observed GUI elements with expected GUI elements. Researchers have proposed fuzzy comparison rules and computationally expensive image processing techniques to tame the comparison process because otherwise the naïve matching comparison would be too constraining and consequently impractical. Alternatively, this paper proposes GUICop, a novel approach with a supporting toolset that takes (1) a GUI programme and (2) user‐defined GUI specifications characterizing the rendering behaviour of the GUI elements and checks whether the execution traces of the programme satisfy the specifications. GUICop comprises the following: (1) a GUI Specification Language; (2) a Driver; (3) Instrumented GUI Libraries; (4) a Solver; and (5) a Code Weaver. The user defines the specifications of the subject GUI programme using the GUI Specification Language. The Driver traverses the GUI structure of the programme and generates events that drive its execution. The Instrumented GUI Libraries capture the GUI execution trace, i.e., information about the positions and visibility of the GUI elements. The Solver, enabled by code injected by the Code Weaver, checks whether the traces satisfy the specifications. GUICop was successfully evaluated using 4 open source GUI applications, namely Jajuk, Gason, JEdit, and TerpPaint, which together included 8 defects.

16.
Identifying a finite test set that adequately captures the essential behaviour of a program such that all faults are identified is a well‐established problem. This is traditionally addressed with syntactic adequacy metrics (e.g. branch coverage), but these can be impractical and may be misleading even if they are satisfied. One intuitive notion of adequacy, which has been discussed in theoretical terms over the past three decades, is the idea of behavioural coverage: If it is possible to infer an accurate model of a system from its test executions, then the test set can be deemed to be adequate. Despite its intuitive basis, it has remained almost entirely in the theoretical domain because inferred models have been expected to be exact (generally an infeasible task) and have not allowed for any pragmatic interim measures of adequacy to guide test set generation. This paper presents a practical approach to incorporate behavioural coverage. Our BESTEST approach (1) enables the use of machine learning algorithms to augment standard syntactic testing approaches and (2) shows how search‐based testing techniques can be applied to generate test sets with respect to this criterion. An empirical study on a selection of Java units demonstrates that test sets with higher behavioural coverage significantly outperform current baseline test criteria in terms of detected faults. © 2015 The Authors. Software Testing, Verification and Reliability published by John Wiley & Sons, Ltd.

17.
If software cannot be tested exhaustively, it must be tested selectively. But, on what should selection be based in order to maximize test effectiveness? It seems sensible to concentrate on the parts of the software where the risks are greatest, but what risks should be sought and how can they be identified and analysed? ‘Risk‐based testing’ is a term in current use, for example in an accredited test‐practitioner's course syllabus, but there is no broadly accepted definition of the phrase and no literature or body of knowledge to underpin the subject implied by it. Moreover, there has so far been no suggestion that it requires an understanding of the subject of risk. This paper examines what is implied by risk‐based testing, shows that its practice requires an understanding of risk, and points to the need for research into the topic and the development of a body of knowledge to underpin it. Copyright © 2004 John Wiley & Sons, Ltd.

18.
In a previous article, a stress testing methodology was reported to detect network traffic‐related Real‐Time (RT) faults in distributed RT systems based on the design UML model of a System Under Test (SUT). The stress methodology, referred to as Test LOcation‐driven Stress Testing (TLOST), aimed at increasing the chances of RT failures (violations in RT constraints) associated with a given stress test location (a network or a node under test). As demonstrated experimentally in this article, although TLOST is useful in stress testing different test locations (nodes and networks), it does not guarantee to target (test) all RT constraints in an SUT. This is because the durations of message sequences bounded by some RT constraints might never be exercised (covered) by TLOST. A complementary stress test methodology is proposed in this article, which guarantees to target (cover) all RT constraints in an SUT and detect their potential RT faults (if any). Using a case study, this article shows that the new complementary methodology is capable of targeting the RT faults not detected by the previous test methodology. Copyright © 2009 John Wiley & Sons, Ltd.

19.
This paper presents two strategies for multi‐way testing (i.e. t‐way testing with t>2). The first strategy generalizes an existing strategy, called in‐parameter‐order, from pairwise testing to multi‐way testing. This strategy requires all multi‐way combinations to be explicitly enumerated. When the number of multi‐way combinations is large, however, explicit enumeration can be prohibitive in terms of both the space for storing these combinations and the time needed to enumerate them. To alleviate this problem, the second strategy combines the first strategy with a recursive construction procedure to reduce the number of multi‐way combinations that have to be enumerated. Both strategies are deterministic, i.e. they always produce the same test set for the same system configuration. This paper reports a multi‐way testing tool called FireEye, and provides an analytic and experimental evaluation of the two strategies. Copyright © 2007 John Wiley & Sons, Ltd.
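The combination-enumeration step that the first strategy depends on can be sketched as follows for t = 2 (toy parameters; this is not FireEye, only an illustration of what "all multi-way combinations" means and why their number grows quickly):

```python
from itertools import combinations, product

# Hypothetical configuration model: parameter -> possible values.
PARAMS = {"os": ["linux", "win"], "db": ["pg", "mysql"], "browser": ["ff", "chrome"]}

def required_interactions(params, t=2):
    """Every t-way value combination a covering array must contain."""
    for names in combinations(params, t):
        for values in product(*(params[n] for n in names)):
            yield tuple(sorted(zip(names, values)))

def uncovered(tests, params, t=2):
    """Interactions not yet covered by the candidate test set."""
    covered = {pair for test in tests
               for pair in combinations(sorted(test.items()), t)}
    return [i for i in required_interactions(params, t) if i not in covered]

tests = [{"os": "linux", "db": "pg", "browser": "ff"},
         {"os": "win", "db": "mysql", "browser": "chrome"}]
print(len(list(required_interactions(PARAMS))), "required,",
      len(uncovered(tests, PARAMS)), "still uncovered")   # 12 required, 6 uncovered
```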

20.
Gordon Fraser, Franz Wotawa, Paul Ammann, Journal of Systems and Software, 2009, 82(9): 1403-1418
The use of model checkers for automated software testing has received some attention in the literature: It is convenient because it allows fully automated generation of test suites for many different test objectives. On the other hand, model checkers were not originally meant to be used this way but for formal verification, so using model checkers for testing is sometimes perceived as a “hack”. Indeed, several drawbacks result from the use of model checkers for test case generation. If model checkers were designed or adapted to take into account the needs that result from the application to software testing, this could lead to significant improvements with regard to test suite quality and performance. In this paper we identify the drawbacks of current model checkers when used for testing. We illustrate techniques to overcome these problems, and show how they could be integrated into the model checking process. In essence, the described techniques can be seen as a general road map to turn model checkers into general purpose testing tools.
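The basic trick that lets a model checker generate tests can be shown with a toy explicit-state reachability check (the model is invented and the code merely stands in for a real checker such as NuSMV): a coverage goal is phrased as the claim that it can never be reached, and the counterexample refuting that claim is the test case.

```python
from collections import deque

# Hypothetical model: state -> {action: next_state}.
TRANSITIONS = {
    "s0": {"open": "s1"},
    "s1": {"write": "s2", "close": "s0"},
    "s2": {"close": "s0"},
}

def counterexample(start, never_reach):
    """Check the trap property 'never_reach is unreachable' by breadth-first
    search; a violating trace doubles as a test case that covers the goal."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, trace = queue.popleft()
        if state == never_reach:
            return trace                    # counterexample found -> test case
        for action, nxt in TRANSITIONS[state].items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, trace + [action]))
    return None                             # property holds: goal is unreachable

# Coverage goal "reach state s2" becomes the trap property "s2 is never reached".
print(counterexample("s0", "s2"))           # -> ['open', 'write']
```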
