Similar Literature
20 similar documents found.
1.
Context: Dynamic test sequences can be generated from a formal specification, complementing traditional testing methods in order to find errors in the source code. Objective: In this paper we extend one specific combinatorial test approach, the Classification Tree Method (CTM), with transition information to generate test sequences. Although we use CTM, this extension is also possible for any combinatorial testing method. Method: The generation of minimal test sequences that fulfill the demanded coverage criteria is an NP-hard problem; therefore, search-based approaches are required to find (near) optimal test sequences. Results: The experimental analysis compares the search-based technique with a greedy algorithm on a set of 12 hierarchical concurrent models of programs extracted from the literature. Our proposed search-based approaches (GTSG and ACOts) are able to generate test sequences by finding the shortest valid path to achieve full class (state) and transition coverage. Conclusion: The extended classification tree is useful for generating test sequences. Moreover, the experimental analysis reveals that our search-based approaches are better than the greedy deterministic approach, especially on the most complex instances. All presented algorithms are integrated into a professional tool for functional testing.
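To make the underlying problem concrete, here is a minimal greedy baseline sketch, not the paper's GTSG or ACOts metaheuristics: it repeatedly extends one test sequence with the shortest path to the nearest uncovered transition of a state-machine model (the dict encoding of the model is an illustrative assumption):

```python
from collections import deque

def greedy_transition_cover(transitions, start):
    """Greedy baseline: one test sequence covering every transition.
    transitions: dict state -> list of (label, next_state).
    Returns the list of (state, label, next_state) steps taken."""
    uncovered = {(s, lbl, t) for s, outs in transitions.items() for lbl, t in outs}
    sequence, current = [], start
    while uncovered:
        # Breadth-first search from the current state to the closest
        # uncovered transition; parent[] records the transition used.
        parent, goal = {current: None}, None
        queue = deque([current])
        while queue and goal is None:
            s = queue.popleft()
            for lbl, t in transitions.get(s, []):
                if (s, lbl, t) in uncovered:
                    goal = (s, lbl, t)
                    break
                if t not in parent:
                    parent[t] = (s, lbl, t)
                    queue.append(t)
        if goal is None:
            break                       # remaining transitions are unreachable
        path, node = [goal], goal[0]    # rebuild the path back to `current`
        while parent[node] is not None:
            path.append(parent[node])
            node = parent[node][0]
        for step in reversed(path):     # replay it, covering every step
            sequence.append(step)
            uncovered.discard(step)
        current = goal[2]
    return sequence
```

For a two-state toggle model, `greedy_transition_cover({"off": [("on", "on")], "on": [("off", "off")]}, "off")` covers both transitions in two steps; the paper's search-based approaches aim to beat such greedy sequences on more complex hierarchical concurrent models.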

2.
Context: Evolutionary algorithms have proved successful for generating test data for path-coverage testing. However, in this approach the set of target paths to be covered may include some that are infeasible, and it is impossible to find test data to cover those paths. Rather than searching indefinitely, or until a fixed limit of generations is reached, it would be desirable to stop searching as soon as it seems likely that all feasible paths have been covered and all remaining uncovered target paths are infeasible. Objective: The objective is to develop criteria to halt the evolutionary test data generation process as soon as it no longer seems worth continuing, without compromising testing confidence level. Method: Drawing on software reliability growth models as an analogy, this paper proposes and evaluates a method for determining when it is no longer worthwhile to continue searching for test data to cover uncovered target paths. We outline the method, its key parameters, and how it can be used as the basis for different decision rules for early termination of a search. Twenty-one test programs from the SBSE path-testing literature are used to evaluate the method. Results: Compared to searching for a standard number of generations, an average of 30–75% of total computation was avoided in test programs with infeasible paths, and no feasible paths were missed due to early termination. The extra computation in programs with no infeasible paths was negligible. Conclusions: The method is effective and efficient. It avoids the need to specify a limit on the number of generations for searching, and it can help to overcome problems caused by infeasible paths in search-based test data generation for path testing.
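The abstract does not reproduce the decision rules themselves. As an illustrative stand-in for the reliability-growth-based criteria, the sketch below stops the search once the observed dry spell since the last newly covered path makes another discovery look unlikely, under a crude Poisson-process assumption (the `horizon` and `confidence` parameters are illustrative):

```python
import math

def should_stop(discoveries, generation, horizon=100, confidence=0.95):
    """Illustrative early-termination rule, not the paper's SRGM method.

    discoveries: generation numbers at which new target paths were covered.
    Under a Poisson-process assumption, with the discovery rate estimated
    (optimistically) as one event per current dry spell, stop once the
    chance of another discovery within `horizon` generations drops below
    1 - confidence.
    """
    if not discoveries:
        return False                       # no evidence yet; keep searching
    dry_spell = generation - discoveries[-1]
    if dry_spell <= 0:
        return False
    rate = 1.0 / dry_spell                 # crude upper estimate of the rate
    p_one_more = 1.0 - math.exp(-rate * horizon)
    return p_one_more < 1.0 - confidence
```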

3.
Test data generation is always a key task in the field of software testing. In recent years, meta-heuristic search techniques have been considered an effective way to assist test data generation in software structural testing: representative test cases with high coverage capability can be picked out of the program input space. Harmony search (HS) is a recently developed algorithm that has been vigorously applied to various optimization problems. In this paper, we apply the harmony search algorithm to generate test data satisfying branch coverage. At the preprocessing stage, the probes used for gathering coverage information are inserted into all branches via static program analysis; at the same time, the encoding and decoding between a test case and a harmony are determined in advance. At the test data searching stage, the subset of test data with the strongest covering ability is stored in harmony memory. During the evolution process, one part of the test suite is selected and adjusted from the harmony memory, and the other part is randomly generated from the input space. Once a test suite is yielded after one round of search, its coverage is measured by the fitness function of our search algorithm. In our work, a new fitness function for branch coverage is constructed by comprehensively considering branch distance and branch weight. Here, the branch weight is determined by branch information in the program, that is, the nesting level of a specific branch and the predicate types in it. Subsequently, the computed coverage metric is used for updating the test suite in the next round of searching. To validate the effectiveness of our proposed method, eight well-known programs are used for experimental evaluation. The results show that the coverage of the HS-based method is usually higher than that of other search algorithms, such as simulated annealing (SA) and genetic algorithms (GA). Meanwhile, HS demonstrates greater stability than SA and GA when varying the population size or performing repeated trials. In short, the music-inspired HS algorithm is well suited to generating test data for branch coverage in software structural testing.
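The abstract names the fitness ingredients (branch distance, and a weight built from nesting level and predicate type) but not their exact combination. Below is one plausible combination as a hedged sketch; the `execute` hook, the branch objects' `id`, `nesting_level` and `predicate_kind` fields, and the weight table are all illustrative assumptions, not the paper's formula:

```python
def branch_distance(op, lhs, rhs, k=1.0):
    """Standard branch-distance scheme: 0 when the desired branch outcome
    is achieved, otherwise a measure of how far the predicate is from it."""
    if op == "==":
        return abs(lhs - rhs)
    if op == "!=":
        return k if lhs == rhs else 0.0
    if op == "<":
        return 0.0 if lhs < rhs else lhs - rhs + k
    if op == "<=":
        return 0.0 if lhs <= rhs else lhs - rhs
    raise ValueError(f"unsupported operator: {op}")

def branch_weight(nesting_level, predicate_kind):
    """Illustrative weight: deeper and harder-to-satisfy predicates count more."""
    kind_factor = {"equality": 2.0, "inequality": 1.0, "flag": 3.0}
    return (1 + nesting_level) * kind_factor.get(predicate_kind, 1.0)

def fitness(test_suite, branches, execute):
    """Weighted branch-coverage fitness to be maximized by harmony search.

    `execute(test)` is assumed to return, per instrumented branch id, the
    (op, lhs, rhs) triple observed closest to taking that branch."""
    total = 0.0
    for b in branches:
        best = min(branch_distance(*execute(t)[b.id]) for t in test_suite)
        normalized = 1.0 / (1.0 + best)      # 1.0 exactly when covered
        total += branch_weight(b.nesting_level, b.predicate_kind) * normalized
    return total
```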

4.
5.
Automatic test case generation algorithms are classified, by the underlying generation technique, into four categories: random, genetic algorithm, ant colony, and particle swarm. The state of the art and recent progress of each category are introduced, analyzed and discussed. Finally, research on automatic test case generation for software testing is summarized.

6.
Automatic test case generation is key to automated testing of Web services. Traditional testing techniques based on algebraic specifications rely on creating, initializing and copying the object under test to verify test results, but third-party Web services do not support these operations, so test cases cannot be translated into executable operation sequences. A feasible solution is to convert each test case into a linear execution sequence that contains only a single instance of the service under test, involves no instance initialization, and only modifies and checks the state of that instance. Improving on existing work, this paper proposes a test execution graph with inverse terms (TEG-I) to describe state changes during test case execution, designs a TEG-I construction algorithm and a single-line execution sequence generation algorithm, and implements a corresponding prototype tool. Experimental results show that the proposed method can effectively automate test case generation and improve the testability of Web services.

7.
Search-based software testing is the application of metaheuristic search techniques to generate software tests. The test adequacy criterion is transformed into a fitness function, and candidate solutions in the search space are evaluated with respect to that fitness function using a metaheuristic search technique. The application of metaheuristic search techniques to testing is promising because exhaustive testing is infeasible given the size and complexity of software under test. Search-based software testing has been applied across the spectrum of test case design methods, including white-box (structural), black-box (functional) and grey-box (a combination of structural and functional) testing. In addition, metaheuristic search techniques have been applied to test non-functional properties. The overall objective of this systematic review is to examine existing work on non-functional search-based software testing (NFSBST). We are interested in the types of non-functional testing targeted using metaheuristic search techniques, the different fitness functions used in different types of search-based non-functional testing, and the challenges in applying these techniques. The systematic review is based on a comprehensive set of 35 articles, obtained after a multi-stage selection process, published between 1996 and 2007. The results show that metaheuristic search techniques have been applied to non-functional testing of execution time, quality of service, security, usability and safety. A variety of metaheuristic search techniques are applicable to non-functional testing, including simulated annealing, tabu search, genetic algorithms, ant colony methods, grammatical evolution, genetic programming (and its variants, including linear genetic programming) and swarm intelligence methods. The review reports on the fitness functions used to guide the search in each of the categories of execution time, safety, usability, quality of service and security, along with a discussion of possible challenges in the application of metaheuristic search techniques.

8.
Machine learning algorithms for event detection

9.
Spreadsheet languages have gained enormous popularity for simple numerical computations, but their power in non-numerical contexts has not been widely recognized. This paper presents spreadsheet implementations of several familiar sequence analysis tools. In many cases, spreadsheet notation clarifies the underlying algorithm, facilitating understanding of existing algorithms and promoting spontaneous experimentation. Spreadsheets also reveal, through their visible parallelism, opportunities for applying parallel processors to computationally challenging problems.

10.
Software and Systems Modeling - Recently, there has been increased interest in combining model-driven engineering and search-based software engineering. Such approaches use meta-heuristic search...

11.
Search-based software testing promises the ability to generate and evaluate large numbers of test cases at minimal cost. From an industrial perspective, this could enable an increase in product quality without a matching increase in the time and effort required to achieve it. Search-based software testing, however, is a set of quite complex techniques and approaches that do not immediately translate into a process for use by most companies. For example, even if engineers receive proper education and training in these new approaches, it can be hard to develop a general fitness function that covers all contingencies. Furthermore, in industrial practice the knowledge and experience of domain specialists are often key for effective testing, and thus for the overall quality of the final software system; yet it is not clear how such domain expertise can be utilized in a search-based system. This paper presents an interactive search-based software testing (ISBST) system designed to operate in an industrial setting, with the explicit aim of requiring only limited expertise in software testing. It uses SBST to search for test cases for an industrial software module, while also allowing domain specialists to use their experience and intuition to interactively guide the search. In addition to presenting the system, this paper reports on an evaluation of the system in a company developing a framework for embedded software controllers. A sequence of workshops provided regular feedback and validation for the design and improvement of the ISBST system. Once developed, the ISBST system was evaluated by four electrical and system engineers from the company (the 'domain specialists' in this context), who used the system to develop test cases for a commonly used controller module. As well as evaluating the utility of the ISBST system, the study generated interaction data that were used in subsequent laboratory experimentation to validate the underlying search-based algorithm in the presence of realistic, but repeatable, interactions. The results confirm that automated software testing tools in general, and search-based tools in particular, can leverage input from domain specialists while generating tests. Furthermore, the evaluation highlighted the benefits of using such an approach to explore areas that current testing practices do not cover, or cover insufficiently.
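How the specialists' input enters the search is not spelled out in the abstract. One common realization, assumed here purely for illustration and not necessarily the ISBST design, is a weighted multi-objective fitness whose weights the specialist re-tunes between search runs:

```python
def weighted_fitness(candidate, objectives, weights):
    """Combine automated objective scores with specialist-chosen weights."""
    return sum(weights[name] * fn(candidate) for name, fn in objectives.items())

# Between search runs the domain specialist re-tunes the weights, steering
# the next run toward behaviours that automated criteria alone would miss.
# The objective names below are hypothetical.
weights = {"boundary_proximity": 0.5, "output_novelty": 0.3, "risk": 0.2}
```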

12.
Sequences of events are a common type of data in various scientific and business applications, e.g. telecommunication network management, the study of web access logs, biostatistics and epidemiology. A natural approach to modelling event sequences is to use time-dependent intensity functions indicating the expected number of events per time unit. In Bayesian modelling, piecewise constant functions can be used to model continuous intensities if the number of segments is a model parameter, and reversible jump Markov chain Monte Carlo (RJMCMC) methods can be exploited in the data analysis. With very large quantities of data, however, these approaches may be too slow. We study dynamic programming algorithms for finding the best-fitting piecewise constant intensity function given a number of pieces. We introduce simple heuristics for pruning the number of potential change points of the functions. Empirical evidence from trials on real and artificial data sets shows that the developed methods yield high performance and can be applied to very large data sets. We also compare the RJMCMC and dynamic programming approaches and show that their results correspond closely. The methods are applied to fault-alarm sequences produced by large telecommunication networks.
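A minimal sketch of the segmentation subproblem the abstract describes: given per-interval event counts, find the k-piece constant intensity that fits best. The segment cost below is the negative Poisson log-likelihood under the segment's maximum-likelihood rate; the authors' exact cost function and change-point pruning heuristics are not reproduced here:

```python
import math

def best_piecewise_constant(counts, k):
    """O(k * n^2) dynamic program: split the event counts of n equal time
    intervals into k contiguous segments, minimizing total negative
    Poisson log-likelihood (constant terms in the counts are dropped)."""
    n = len(counts)
    assert 1 <= k <= n
    prefix = [0] * (n + 1)
    for i, c in enumerate(counts):
        prefix[i + 1] = prefix[i] + c

    def seg_cost(i, j):                       # cost of counts[i:j]
        total, length = prefix[j] - prefix[i], j - i
        if total == 0:
            return 0.0
        lam = total / length                  # ML constant intensity
        return -(total * math.log(lam) - lam * length)

    INF = float("inf")
    dp = [[INF] * (n + 1) for _ in range(k + 1)]   # dp[p][j]: counts[:j], p pieces
    cut = [[0] * (n + 1) for _ in range(k + 1)]
    dp[0][0] = 0.0
    for p in range(1, k + 1):
        for j in range(p, n + 1):
            for i in range(p - 1, j):
                c = dp[p - 1][i] + seg_cost(i, j)
                if c < dp[p][j]:
                    dp[p][j], cut[p][j] = c, i
    cuts, j = [], n                           # walk the cut table backwards
    for p in range(k, 0, -1):
        j = cut[p][j]
        cuts.append(j)
    return dp[k][n], sorted(cuts)[1:]         # k-1 internal change points
```

The O(k·n²) cost of this plain DP is exactly why the paper's pruning heuristics matter for very large data sets.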

13.
A new, parallel approach for generating Bresenham-type lines is developed. Coordinate pairs which approximate straight lines on a square grid are derived from line equations. These pairs serve as a basis for the development of four new parallel algorithms. One of the algorithms uses the fact that straight-line generation is equivalent to a vector prefix-sums calculation. The algorithms execute on a binary tree of processors; each node in the tree performs a simple calculation that involves only additions and shifts. All four algorithms have time complexity O(log₂ n), where n, of the form 2^m, denotes the number of points generated and n−1 is the number of processors in the tree. This compares to O(n) for Bresenham's algorithm executed on a sequential processor. Pipelining can be used to achieve a constant time per line generation as long as the line length is less than n.
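The enabling observation is that each point of a Bresenham-style line can be computed directly from the line equation, independently of its neighbors. A sketch of that direct formula (not the add-and-shift tree algorithm itself) for the first octant:

```python
def line_points(dx, dy):
    """Points approximating the line from (0,0) to (dx,dy), with 0 <= dy <= dx.

    y_i = round(i * dy / dx), evaluated in integer arithmetic as
    floor((2*i*dy + dx) / (2*dx)).  Each point depends only on i, so on a
    parallel machine every i can be computed by its own processor."""
    return [(i, (2 * i * dy + dx) // (2 * dx)) for i in range(dx + 1)]
```

Because the points are mutually independent, the sequential error-accumulation loop of classic Bresenham disappears, which is what makes the tree-of-processors and prefix-sums formulations possible.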

14.
The existence of orderly analogues of graph generators proposed by Heap and Farrell is established. The modifications to these algorithms supply practical methods enabling one to generate exhaustive lists of graphs and locally restricted graphs; moreover, the difficulty involved in ensuring that no duplications occur in the list is greatly reduced.

15.
Search-based test-data generation has proved successful for code-level testing, but almost no search-based work has been carried out at higher levels of abstraction. In this paper the application of such approaches at the higher level of abstraction offered by MATLAB/Simulink models is investigated, and a wide-ranging framework for test-data generation and management is presented. Model-level analogues of code-level structural coverage criteria are presented, and search-based approaches to achieving them are described. The paper also describes the first search-based approach to the generation of mutant-killing test data, addressing a fundamental limitation of mutation testing. Some problems remain whatever the level of abstraction considered; in particular, the complexity introduced by the presence of persistent state when generating test sequences is as much a challenge at the Simulink model level as it has been found to be at the code level. The framework addresses this problem. Finally, a flexible approach to test-subset extraction is presented, allowing testing resources to be deployed effectively and efficiently.
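For the mutant-killing part, the abstract gives no formula. A common formulation, assumed here rather than quoted from the paper, is to search for an input on which the original model and a mutant produce different outputs; the sketch below uses plain random sampling where a guided SBST approach would use a divergence-based fitness:

```python
import random

def search_killing_input(original, mutant, dim, budget=10_000, span=100.0):
    """Random search for a test input on which the original model and a
    mutant diverge (a guided SBST approach would replace the sampling
    with a fitness-driven search; this sketch only illustrates the goal)."""
    for _ in range(budget):
        x = [random.uniform(-span, span) for _ in range(dim)]
        if abs(original(x) - mutant(x)) > 1e-9:
            return x                    # this input kills the mutant
    return None                         # mutant may be equivalent

# Hypothetical example: a saturation block and a mutant with a perturbed limit.
orig = lambda x: min(max(x[0], -1.0), 1.0)
mut  = lambda x: min(max(x[0], -1.0), 1.1)
print(search_killing_input(orig, mut, dim=1))
```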

16.
Reactive real-time systems have to react to external events within time constraints: triggered tasks must execute within deadlines. It is therefore important for the designers of such systems to analyze the schedulability of tasks during the design process, as well as to test the system's response time to events in an effective manner once it is implemented. This article explores the use of genetic algorithms to provide automated support for both tasks. Our main objective is to automate, based on the system task architecture, the derivation of test cases that maximize the chances of critical deadline misses within the system; we refer to this testing activity as stress testing. A second objective is to enable an early but realistic analysis of tasks' schedulability at design time. We have developed a specific solution based on genetic algorithms and implemented it in a tool. Case studies were run, and the results show that the tool (1) is effective at identifying test cases that will likely stress the system to such an extent that some tasks may miss deadlines, and (2) can identify situations that were deemed schedulable based on standard schedulability analysis but that, nevertheless, exhibit deadline misses.
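A hedged sketch of the fitness idea: a chromosome fixes the arrival times of triggered tasks, a scheduler simulates execution, and fitness rewards scenarios that push some task past its deadline. The scheduler below simply serves jobs in global deadline order, a deliberately simplified, non-preemptive stand-in for the richer task model the paper works with:

```python
def stress_fitness(arrivals, tasks):
    """arrivals[i]: chromosome-chosen release time of task i.
    tasks[i]: (execution_time, relative_deadline).
    Returns the maximum lateness over all tasks -- a positive value
    means this arrival pattern causes a deadline miss."""
    jobs = sorted(
        (arrivals[i] + tasks[i][1], arrivals[i], tasks[i][0])
        for i in range(len(tasks))
    )                                         # (absolute deadline, release, cost)
    clock, worst = 0.0, float("-inf")
    for deadline, release, cost in jobs:
        clock = max(clock, release) + cost    # wait for release, then run
        worst = max(worst, clock - deadline)  # lateness of this job
    return worst
```

A GA then maximizes this value over `arrivals`; for example, `stress_fitness([0, 0, 0], [(3, 5), (4, 6), (2, 4)])` returns 3.0, i.e. simultaneous arrivals already force a three-unit deadline miss in this toy task set.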

17.
Low run-time overhead, self-adapting storage policies for priority queues, called smart priority queue (SPQ) techniques, are developed and evaluated. The proposed SPQ policies employ a low-complexity linear queue for near-head activities and a rapid-indexing, variable bin-width calendar queue for distant events. The SPQ configuration is determined by monitoring queue access behavior using cost-scoring factors and then applying heuristics to adjust the organization of the underlying data structures. To illustrate and evaluate the method, an SPQ-based scheduler for discrete event simulation has been implemented and used to assess the efficiency, components of access time, and queue-usage distributions of the existing and proposed algorithms. Results indicate that matching storage to the spatial distribution of queue accesses can decrease HOLD operation cost by between 25% and 250% over existing algorithms such as calendar queues.
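The two-tier layout can be illustrated with a toy structure: a small sorted list absorbs near-head traffic while coarse calendar-style bins hold distant events. The adaptive cost scoring and bin-width tuning of the actual SPQ are omitted, and the thresholds below are illustrative:

```python
import bisect
from collections import defaultdict

class TwoTierQueue:
    """Toy queue in the spirit of the SPQ layout, not the adaptive SPQ
    policy itself: a sorted 'near' list plus coarse bins for 'far' events."""

    def __init__(self, near_span=16.0, bin_width=16.0):
        self.near, self.far = [], defaultdict(list)
        self.bin_width = bin_width
        self.horizon = near_span             # priorities below this go near

    def push(self, priority, item):
        if priority < self.horizon:
            bisect.insort(self.near, (priority, item))
        else:
            self.far[int(priority // self.bin_width)].append((priority, item))

    def pop(self):
        """Return the (priority, item) pair with the smallest priority.
        Raises ValueError if the queue is empty."""
        if not self.near:
            self._refill()
        return self.near.pop(0)

    def _refill(self):
        b = min(self.far)                    # earliest non-empty far bin
        self.near = sorted(self.far.pop(b))  # promote it to the near tier
        self.horizon = (b + 1) * self.bin_width
```

The design point the paper quantifies is that most accesses in discrete event simulation hit near the head, so keeping that region in a cheap linear structure pays off even though the far tier is only coarsely sorted.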

18.
Summary: The well-known lower bound of log₂ n! on the number of comparisons required to sort n items is extended to cover algorithms, such as replacement selection, which produce a sorted string whose length is a random variable. The case of algorithms which produce several strings is also discussed, and these results are then applied to obtain an upper bound on the length of strings produced by a class of string-generation algorithms. The authors would like to thank Professor R. Floyd for a stimulating discussion at an early stage in this work.
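For reference, the classical bound being extended: a comparison sort must distinguish all n! input orderings, so its decision tree has at least n! leaves and therefore depth at least log₂ n!, which Stirling's approximation makes concrete:

```latex
C(n) \;\ge\; \lceil \log_2 n! \rceil, \qquad
\log_2 n! \;=\; n \log_2 n \;-\; n \log_2 e \;+\; O(\log n).
```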

19.
Combinatorial algorithms for DNA sequence assembly
The trend toward very large DNA sequencing projects, such as those being undertaken as part of the Human Genome Program, necessitates the development of efficient and precise algorithms for assembling a long DNA sequence from the fragments obtained by shotgun sequencing or other methods. The sequence reconstruction problem that we take as our formulation of DNA sequence assembly is a variation of the shortest common superstring problem, complicated by the presence of sequencing errors and reverse complements of fragments. Since the simpler superstring problem is NP-hard, any efficient reconstruction procedure must resort to heuristics. In this paper, however, a four-phase approach based on rigorous design criteria is presented and has been found to be very accurate in practice. Our method is robust in the sense that it can accommodate high sequencing error rates, and it lists a series of alternate solutions in the event that several appear equally good. Moreover, it uses a limited form of multiple sequence alignment to detect, and often correct, errors in the data. The combined algorithm has successfully reconstructed non-repetitive sequences of length 50,000 sampled at error rates as high as 10%. This research was supported by the National Library of Medicine under Grant R01-LM4960, by a postdoctoral fellowship from the Program in Mathematics and Molecular Biology of the University of California at Berkeley under National Science Foundation Grant DMS-8720208, and by a fellowship from the Centre de recherches mathématiques of the Université de Montréal.
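The superstring flavor of the problem is conveyed by the textbook greedy approximation, sketched below: repeatedly merge the pair of fragments with the largest suffix-prefix overlap. This error-free sketch ignores the sequencing errors and reverse complements that the paper's four-phase method is designed to handle:

```python
def overlap(a, b):
    """Length of the longest suffix of a that is a prefix of b."""
    for length in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:length]):
            return length
    return 0

def greedy_superstring(fragments):
    """Greedy shortest-common-superstring approximation (error-free case)."""
    frags = [f for f in fragments
             if not any(f in g for g in fragments if g != f)]  # drop contained
    while len(frags) > 1:
        _, a, b = max(((overlap(a, b), a, b)
                       for a in frags for b in frags if a != b),
                      key=lambda t: t[0])
        frags.remove(a); frags.remove(b)
        frags.append(a + b[overlap(a, b):])                    # merge on overlap
    return frags[0]

print(greedy_superstring(["GATTA", "TTACA", "ACAGG"]))         # GATTACAGG
```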

20.
A positive integer n is a perfect power if there exist integers x and k, both at least 2, such that n = x^k. The usual algorithm to recognize perfect powers computes approximate k-th roots for k ≤ log₂ n, and runs in time O(log³ n · log log log n). First, we improve this worst-case running time to O(log³ n) by using a modified Newton's method to compute approximate k-th roots; parallelizing this gives an NC² algorithm. Second, we present a sieve algorithm that avoids k-th-root computations by checking whether the input n is a perfect k-th power modulo small primes. If n is chosen uniformly from a large enough interval, the average running time is O(log² n). Third, we incorporate trial division to give a sieve algorithm with an average running time of O(log² n / log² log n) and a median running time of O(log n). The two sieve algorithms use a precomputed table of small primes. We give a heuristic argument and computational evidence that the largest prime needed in this table is (log n)^{1+o(1)}; assuming the Extended Riemann Hypothesis, primes up to (log n)^{2+o(1)} suffice. The table can be computed in time roughly proportional to the largest prime it contains. We also present computational results indicating that our sieve algorithms perform extremely well in practice. This work forms part of the second author's Ph.D. thesis at the University of Wisconsin-Madison, 1991. This research was sponsored by NSF Grants CCR-8552596 and CCR-8504485.
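The baseline algorithm of the first paragraph is easy to make concrete: try every exponent k ≤ log₂ n and test an exact integer k-th root. This sketch uses a plain binary-search root finder for clarity, not the modified Newton iteration or the sieve variants of the paper:

```python
def iroot(n, k):
    """Largest x with x**k <= n, found by binary search."""
    lo, hi = 1, 1 << ((n.bit_length() + k - 1) // k)   # hi**k >= n
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** k <= n:
            lo = mid
        else:
            hi = mid - 1
    return lo

def is_perfect_power(n):
    """True iff n = x**k for some integers x, k >= 2."""
    for k in range(2, n.bit_length() + 1):   # x >= 2 forces k <= log2(n)
        x = iroot(n, k)
        if x >= 2 and x ** k == n:
            return True
    return False

assert is_perfect_power(3 ** 7) and not is_perfect_power(3 ** 7 + 1)
```

The sieve algorithms in the abstract avoid most of these root computations by first ruling out exponents k for which n is not a k-th power residue modulo a few small primes.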
