Similar Documents
20 similar documents found (search time: 15 ms)
1.
Regression testing is a frequently performed and costly task in software evolution, and the degree to which the test suite is optimized directly affects testing cost and efficiency. Targeting the characteristics of the regression testing process, a new test-suite optimization method is proposed: by eliminating redundant test cases and reordering the remainder, the initial suite is reduced and its execution order determined, so that limited testing resources are allocated scientifically and reasonably. Experimental results show that, compared with previous test-suite optimization methods, the new method significantly improves both efficiency and the rationality of resource allocation.
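The reduce-then-reorder idea summarized above can be sketched as a greedy set-cover pass followed by a coverage-based ordering. This is a hypothetical illustration in Python; the abstract does not give the paper's actual algorithm, and the data layout (test id to covered-requirements set) is an assumption.

```python
def reduce_and_order(suite):
    """suite: dict mapping test-case id -> set of covered requirements."""
    uncovered = set().union(*suite.values())
    kept = []
    remaining = dict(suite)
    # Phase 1: greedy redundancy elimination (classic set-cover heuristic).
    while uncovered:
        best = max(remaining, key=lambda t: len(remaining[t] & uncovered))
        if not remaining[best] & uncovered:
            break
        kept.append(best)
        uncovered -= remaining[best]
        del remaining[best]
    # Phase 2: order the reduced suite by total coverage, descending.
    kept.sort(key=lambda t: len(suite[t]), reverse=True)
    return kept

suite = {
    "t1": {"r1", "r2"},
    "t2": {"r2"},          # redundant: r2 is already covered by t1
    "t3": {"r3", "r4", "r5"},
}
print(reduce_and_order(suite))  # ['t3', 't1'] -- t2 is dropped as redundant
```

The greedy pass keeps only tests that cover something new, and the final sort front-loads the broadest tests so an interrupted run still covers as much as possible.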

2.
To test evolved software thoroughly, regression testing usually requires generating new test cases. Concolic testing is a software verification technique that performs symbolic execution along concrete execution paths, generating test data that exercises all feasible program paths. In regression testing, however, concolic testing focuses on the program itself and exploits neither the existing test cases nor software-evolution information, so it generates a large amount of ineffective test data, wasting time and resources. To address this problem, a path-guided method for augmenting regression test suites is proposed. Using target paths as guidance, the method selects, according to evolution information, the existing test cases most likely to help cover a target path, reuses them to skip the overlapping initial sub-path, and then applies concolic testing to the remaining target sub-path to generate test data covering the target path. A case study shows that, compared with conventional concolic testing, the method covers the feasible program paths while effectively reducing the number of concolic execution paths and improving the efficiency of test-data generation.

3.
Test data generation is a key problem in combinatorial testing, but constructing a combinatorial test suite is NP-complete. A method for globally optimizing and generating pairwise combinatorial test suites is proposed. Through an encoding scheme, test-data generation is transformed into an optimization problem over binary strings; a genetic algorithm searches this encoding space, and the best individual found is decoded to construct the final test suite. Experimental results show that the method is simple and efficient, producing high-quality solutions at low time cost.
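The objective such a genetic algorithm would maximize is pairwise coverage: the number of distinct parameter-value pairs exercised by a candidate suite. A minimal sketch of that fitness function (the encoding and names are illustrative, not the paper's):

```python
from itertools import combinations

def pairwise_fitness(suite):
    """Count the distinct parameter-value pairs covered by a candidate suite.

    suite: list of test cases; each test case is a tuple of parameter values.
    A GA would decode each binary-encoded individual into such a suite and
    use this count as its fitness.
    """
    covered = set()
    for case in suite:
        # every unordered pair of (parameter index, value) within one case
        for (i, a), (j, b) in combinations(enumerate(case), 2):
            covered.add((i, a, j, b))
    return len(covered)

# Three binary parameters: full pairwise coverage needs all 4 value pairs
# for each of the 3 parameter pairs, i.e. 12 pairs in total.
suite = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
print(pairwise_fitness(suite))  # 12 -- this 4-case suite is pairwise-complete
```

Four cases suffice here even though the full Cartesian product has eight, which is exactly the saving pairwise generation aims for.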

4.
Fault-based test suite prioritization for specification-based testing (cited 1 time: 0 self-citations, 1 other)

Context

Existing test suite prioritization techniques usually rely on code coverage information or historical execution data that serve as indicators for estimating the fault-detecting ability of test cases. Such indicators are primarily empirical in nature and not theoretically driven; hence, they do not necessarily provide sound estimates. Also, these techniques are not applicable when the source code is not available or when the software is tested for the first time.

Objective

We propose and develop the novel notion of fault-based prioritization of test cases which directly utilizes the theoretical knowledge of their fault-detecting ability and the relationships among the test cases and the faults in the prescribed fault model, based on which the test cases are generated.

Method

We demonstrate our approach of fault-based prioritization by applying it to the testing of the implementation of logical expressions against their specifications. We then validate our proposal by an empirical study that evaluates the effectiveness of prioritization techniques using two different metrics.

Results

A theoretically guided fault-based prioritization technique generally outperforms other techniques under study, as assessed by two different metrics. Our empirical results also show that the technique helps to reveal all target faults by executing only about 72% of the prioritized test suite, thereby reducing the effort required in testing.

Conclusions

The fault-based prioritization approach is not only applicable to the instance empirically validated in this paper, but should also be adaptable to other fault-based testing strategies. We also envisage new research directions to be opened up by our work.

5.
Prioritizing test cases for regression testing (cited 1 time: 0 self-citations, 1 other)
Test case prioritization techniques schedule test cases for execution in an order that attempts to increase their effectiveness at meeting some performance goal. Various goals are possible; one involves rate of fault detection, a measure of how quickly faults are detected within the testing process. An improved rate of fault detection during testing can provide faster feedback on the system under test and let software engineers begin correcting faults earlier than might otherwise be possible. One application of prioritization techniques involves regression testing, the retesting of software following modifications; in this context, prioritization techniques can take advantage of information gathered about the previous execution of test cases to obtain test case orderings. We describe several techniques for using test execution information to prioritize test cases for regression testing, including: 1) techniques that order test cases based on their total coverage of code components; 2) techniques that order test cases based on their coverage of code components not previously covered; and 3) techniques that order test cases based on their estimated ability to reveal faults in the code components that they cover. We report the results of several experiments in which we applied these techniques to various test suites for various programs and measured the rates of fault detection achieved by the prioritized test suites, comparing those rates to the rates achieved by untreated, randomly ordered, and optimally ordered suites.
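Techniques 1 and 2 from the abstract, "total" versus "additional" coverage ordering, can be sketched in a few lines. The coverage sets below are illustrative inputs, not data from the paper:

```python
def prioritize_total(suite):
    """Technique 1: order test cases by total coverage, descending."""
    return sorted(suite, key=lambda t: len(suite[t]), reverse=True)

def prioritize_additional(suite):
    """Technique 2: repeatedly pick the test covering the most
    components not yet covered by earlier picks."""
    order, covered, remaining = [], set(), dict(suite)
    while remaining:
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        order.append(best)
        covered |= remaining.pop(best)
    return order

suite = {"t1": {"a", "b", "c"}, "t2": {"a", "b"}, "t3": {"d"}}
print(prioritize_total(suite))       # ['t1', 't2', 't3']
print(prioritize_additional(suite))  # ['t1', 't3', 't2'] -- t3 adds new coverage
```

The two orderings diverge exactly when a small test reaches components the big tests miss, which is why additional-coverage ordering often detects faults sooner.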

6.
Computation techniques have provided designers with a deeper understanding of market niches that were previously neglected. Usage context information has been studied in marketing research since the last century; however, little research in design engineering focuses on it. Therefore, in this paper, we analyze the relations between usage context information and the design of products. A usage coverage model is established to integrate users and their expected usage scenarios into product family assessment. We map the user's individual capacity together with a given product into the usage context space. The overlap between the required usage and the feasible usage can then be measured. Based on this mechanism, several usage coverage indices are proposed to assess the compliance of a given product family with the expected set of usage scenarios to be covered. The method is demonstrated on a scale-based product family of jigsaws in a redesign context. A constraint programming technique is applied to solve the physics-based causal loops that determine usage performance in a set-based design approach. Designers can rely on the results to eliminate redundant units in the family or modify the configuration of each product. The contribution of the paper is to provide an interdisciplinary point of view on assessing the composition and configuration of a product family design.

7.
Evolutionary selection extreme learning machine optimization for regression (cited 2 times: 1 self-citation, 1 other)
Neural network regression models can approximate unknown datasets with low error. As an important global regression method, the extreme learning machine (ELM) is a typical learning method for single-hidden-layer feedforward networks, owing to its good generalization performance and fast implementation. The randomness of the input weights allows the nonlinear combination to achieve arbitrary function approximation. In this paper, we seek an alternative mechanism for the input connections, with an idea derived from evolutionary algorithms. After predefining the number L of hidden nodes, we generate the original ELM models, treating each hidden node as a gene. The hidden nodes are ranked, and the larger-weight nodes are reassigned to the updated ELM; L/2 trivial hidden nodes are placed in a candidate reservoir. We then generate L/2 new hidden nodes and combine them with hidden nodes drawn from this candidate reservoir, using a second ranking to choose among them. Fitness-proportional selection selects L/2 hidden nodes and recombines the evolutionary-selection ELM. The entire algorithm can be applied to large-scale dataset regression. Verification shows that its regression performance is better than that of the traditional ELM and the Bayesian ELM at lower cost.

8.
To reduce test-suite size and lower the cost of regression testing, a hybrid genetic algorithm combining a genetic algorithm with a greedy algorithm is proposed for test-suite minimization. The algorithm improves the selection, crossover, and mutation operators of the standard genetic algorithm, strengthening its global search ability, and uses the greedy algorithm to handle feasible and infeasible solutions, strengthening its local search ability. Experimental results show that, compared with the standard genetic algorithm, the hybrid algorithm achieves better reduction and faster convergence while preserving test completeness.

9.
By combining coverage-data techniques with regression test selection, a filtering method for regression test validation is proposed. The method selects test cases through a depth-first traversal of the program's coverage records, which effectively improves the accuracy of regression testing, shortens testing time, and omits test cases that need not be rerun, thereby lowering the cost of regression testing. Experiments were designed for three classes of programs (sequential, looping, and branching) to compare the algorithm's effectiveness on each.

10.
Context

Test suite reduction is the problem of creating and executing a set of test cases that are smaller in size but equivalent in effectiveness to an original test suite. However, reduced suites can still be large, and executing all the tests in a reduced test suite can be time consuming.

Objective

We propose ordering the tests in a reduced suite to increase its rate of fault detection. The ordered reduced test suite can be executed in time-constrained situations, where, even if test execution is stopped early, the best test cases from the reduced suite will already have been executed.

Method

In this paper, we present several approaches to ordering reduced test suites using experimentally verified prioritization criteria for the domain of web applications. We conduct an empirical study with three subject applications and user-session-based test cases to demonstrate how ordered reduced test suites often make a practical contribution. To enable comparison between test suites of different sizes, we develop Mod_APFD_C, a modification of the traditional prioritization effectiveness measure.

Results

We find that by ordering the reduced suites, we create test suites that are more effective than unordered reduced suites. In each of our subject applications, there is at least one ordered reduced suite that outperforms the best unordered reduced suite and the best prioritized original suite.

Conclusions

Our results show that when a tester does not have enough time to execute the entire reduced suite, executing an ordered reduced suite often improves the rate of fault detection. By coupling the underlying system's characteristics with observations from our study on the criteria that produce the best ordered reduced suites, a tester can order their reduced test suites to obtain increased testing effectiveness.
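The traditional measure that Mod_APFD_C modifies is APFD (Average Percentage of Faults Detected). A sketch of the classic formula, APFD = 1 - (sum of first-detection positions)/(n*m) + 1/(2n), on illustrative data (the Mod_APFD_C variant itself is paper-specific and not reproduced here):

```python
def apfd(order, faults_detected, n_faults):
    """Average Percentage of Faults Detected for one test ordering.

    order: test ids in execution order (length n).
    faults_detected: dict test id -> set of fault ids it reveals.
    n_faults: total number m of faults under consideration.
    """
    n, m = len(order), n_faults
    first_pos = {}
    for pos, t in enumerate(order, start=1):
        for f in faults_detected.get(t, ()):
            first_pos.setdefault(f, pos)   # record earliest detection only
    total = sum(first_pos.values())
    return 1 - total / (n * m) + 1 / (2 * n)

faults = {"t1": {"f1"}, "t2": {"f2", "f3"}, "t3": set()}
print(apfd(["t2", "t1", "t3"], faults, 3))  # 13/18, about 0.722
```

Running the two-fault test first yields a higher APFD than any ordering that delays it, which is the quantity prioritization tries to maximize.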

11.
Regression testing is important activity during the software maintenance to deal with adverse effects of changes. Our approach is important for safety critical system as usually formal methods are preferred and highly recommended for the safety critical systems but they are also applied for the systems development of other than critical system. Our approach is based on Regression testing using VDM++ which takes two VDM++ specifications, one baseline and other delta (Changed) along with test suite for the baseline version. It compares both versions by using comparator module, identifies the change. By analyzing the change we classify the test cases from original test suite into obsolete, re-testable, and reusable test cases. Our scope is at unit level i.e. at class level. Our approach gets two versions of VDM++ specification and returns regression test suite for the delta version. Our approach distinguishes test cases which are still effective for the delta version of VDM++ specification and it differs from re-test all strategy as it can distinguish the test cases and identifies test cases which are useful for delta version. Test cases reusability and test case reduction is the main objective of our approach. Our approach presents how to perform regression testing using VDM++ specification during the maintenance of systems.  相似文献   

12.
Context

In the software industry, project managers usually rely on their previous experience to estimate the number of man-hours required for each software project. The accuracy of such estimates is a key factor in the efficient application of human resources. Machine learning techniques such as radial basis function (RBF) neural networks, multi-layer perceptron (MLP) neural networks, support vector regression (SVR), bagging predictors, and regression-based trees have recently been applied to estimating software development effort. Some works have demonstrated that the level of accuracy in software effort estimates strongly depends on the values of the parameters of these methods. In addition, it has been shown that the selection of the input features may also have an important influence on estimation accuracy.

Objective

This paper proposes and investigates the use of a genetic algorithm for simultaneously (1) selecting an optimal input feature subset and (2) optimizing the parameters of machine learning methods, aiming at a higher accuracy level for software effort estimates.

Method

Simulations are carried out using six benchmark data sets of software projects, namely Desharnais, NASA, COCOMO, Albrecht, Kemerer, and Koten and Gray. The results are compared to those obtained by methods proposed in the literature using neural networks, support vector machines, multiple additive regression trees, bagging, and Bayesian statistical models.

Results

In all data sets, the simulations have shown that the proposed GA-based method was able to improve the performance of the machine learning methods. The simulations have also demonstrated that the proposed method outperforms some methods reported in the recent literature for software effort estimation. Furthermore, the use of the GA for feature selection considerably reduced the number of input features for five of the data sets used in our analysis.

Conclusions

The combination of input feature selection and parameter optimization of machine learning methods improves the accuracy of software development effort estimates. In addition, it reduces model complexity, which may help in understanding the relevance of each input feature. Therefore, some input parameters can be ignored without loss of accuracy in the estimations.
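The "simultaneous" part of such a GA hinges on the chromosome layout: one string encodes both the feature mask and the method's parameters. A minimal decoding sketch; the layout, gene ranges, and log-scale mapping below are illustrative assumptions, not the paper's encoding:

```python
import random

def decode(chrom, n_features):
    """Split one chromosome into a feature mask plus two real parameters.

    Layout (assumed here): n_features binary genes for the mask, then two
    real genes in [0, 1) mapped onto a log scale, e.g. an SVR penalty C
    and an RBF width gamma.
    """
    mask = chrom[:n_features]
    C = 2 ** (chrom[n_features] * 10 - 5)        # in [2**-5, 2**5)
    gamma = 2 ** (chrom[n_features + 1] * 10 - 5)
    return mask, C, gamma

def random_chromosome(n_features, rng):
    """One random individual: bit genes for features, real genes for params."""
    return [rng.randint(0, 1) for _ in range(n_features)] + [rng.random(), rng.random()]

rng = random.Random(42)
chrom = random_chromosome(4, rng)
mask, C, gamma = decode(chrom, 4)
print(len(mask), C > 0, gamma > 0)  # a 4-bit mask and two positive parameters
```

A fitness function would train the chosen method on the masked features with the decoded parameters and return an accuracy measure; standard crossover and mutation then search both spaces at once.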

13.
To address the need for continuous optimization of regression testing in continuous-integration environments, an optimization method that adaptively adjusts its strategy to the regression-testing goal is proposed. First, each test case is tagged with attributes: a failure flag, the number of defects detected, an importance factor, and a new-versus-existing-feature flag, initialized from historical data and association relations. Then, according to the testing goal of the current stage, new-feature testing is distinguished from modification testing, requirements are mapped onto concrete attribute targets, and test cases are selected accordingly. The importance factor is computed, attribute labels are updated, and the test cases are automatically prioritized by their attributes. During execution, a subset of test cases of appropriate size is chosen according to time and resource constraints. Experiments on open-source datasets show that, for different testing goals, the method reduces the number of executed test cases and improves defect-detection efficiency.
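The attribute-based prioritization step can be sketched as a weighted score over the four attributes named above. The weights and field names here are illustrative assumptions; the paper derives its importance factor from historical data rather than fixed constants:

```python
def priority(tc, w_fail=4.0, w_defects=2.0, w_importance=1.5, w_new=1.0):
    """Score one test case from its attribute labels (weights are assumed)."""
    return (w_fail * tc["failed_last"]
            + w_defects * tc["defects_found"]
            + w_importance * tc["importance"]
            + w_new * tc["is_new"])

cases = [
    {"id": "t1", "failed_last": 0, "defects_found": 1, "importance": 0.5, "is_new": 0},
    {"id": "t2", "failed_last": 1, "defects_found": 0, "importance": 0.2, "is_new": 0},
    {"id": "t3", "failed_last": 0, "defects_found": 0, "importance": 0.1, "is_new": 1},
]
ordered = sorted(cases, key=priority, reverse=True)
print([tc["id"] for tc in ordered])  # ['t2', 't1', 't3'] -- recent failures first
```

Under a time budget, the CI job would then execute only a prefix of this ordering, which is the size-reduction step the abstract describes.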

14.
A fundamental problem when performing incremental learning is that the best set of a classification system's parameters can change with the evolution of the data. Consequently, unless the system self-adapts to such changes, it will become obsolete, even if the application environment seems to be static. To address this problem, we propose a dynamic optimization approach in this paper that performs incremental learning in an adaptive fashion by tracking, evolving, and combining optimum hypotheses over time. The approach incorporates various theories, such as dynamic particle swarm optimization, incremental support vector machine classifiers, change detection, and dynamic ensemble selection based on classifiers' confidence levels. Experiments carried out on synthetic and real-world databases demonstrate that the proposed approach outperforms the classification methods often used in incremental learning scenarios. © 2011 Wiley Periodicals, Inc.

15.
16.
This paper describes a method of test-suite reduction for regression testing of the interaction between two software modules, based on a model of module interaction built from a test suite and subsequent filtering of the suite using the constructed model. The interaction model is constructed in terms of sequences of interface functions of the modules being integrated that are called during the software run.

17.
Testing a component embedded into a complex system, in which all other components are assumed fault‐free, is known as embedded testing. This paper proposes a method for minimizing a test suite to perform embedded testing. The minimized test suite maintains the fault coverage of the original test suite with respect to faults within the embedded component. The minimization uses the fact that the system is composed of a fault‐free context and a component under test, specified as communicating, possibly non‐deterministic finite state machines (FSMs). The method is illustrated using an example of telephone services on an intelligent network architecture. Other applications of the proposed approach for testing a system of communicating FSMs are also discussed. Copyright © 2003 John Wiley & Sons, Ltd.

18.
Test suite reduction strategies aim to produce a smaller, representative suite that preserves the coverage of the original one but is more cost-effective. In the model-based testing (MBT) context, reduction is crucial, since automatic generation algorithms may blindly produce many similar test cases. To define the degree of similarity between test cases, researchers have investigated a number of distance functions. However, there is still little or no knowledge of whether and how they influence the performance of reduction strategies, particularly when considering MBT practices. This paper investigates the effectiveness of distance functions in the scope of an MBT reduction strategy based on the similarity degree of test cases. We discuss six distance functions and apply them in three empirical studies. The first two studies are controlled experiments focusing on two real-world applications (and real faults) and ten synthetic specifications automatically generated from the configuration of each application (with randomly generated faults). In the third study, we also apply the reduction strategy to two subsequent versions of an industrial application, considering real faults detected. Results show that the choice of a distance function has little influence on the size of the reduced test suite. However, as the reduced suites differ depending on the distance function applied, the choice can significantly affect fault coverage. Moreover, it can also affect the stability of the reduction strategy regarding coverage of different sets of faults across different executions.
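One widely used distance function over test cases viewed as sets of steps is the Jaccard distance. The sketch below pairs it with a greedy similarity-based reduction pass; the threshold, greedy order, and set-of-steps representation are illustrative assumptions, not the paper's six functions or its exact strategy:

```python
def jaccard_distance(a, b):
    """Jaccard distance between two step sets: 1 - |A∩B| / |A∪B|."""
    union = a | b
    return 1 - len(a & b) / len(union) if union else 0.0

def reduce_by_similarity(suite, threshold=0.5):
    """Greedy sketch: keep a test only if it is at least `threshold`
    distant from every test kept so far."""
    kept = {}
    for name, steps in suite.items():
        if all(jaccard_distance(steps, s) >= threshold for s in kept.values()):
            kept[name] = steps
    return list(kept)

suite = {
    "t1": {"login", "search"},
    "t2": {"login", "search", "sort"},   # near-duplicate of t1
    "t3": {"checkout", "pay"},
}
print(reduce_by_similarity(suite))  # ['t1', 't3'] -- the near-duplicate is dropped
```

Swapping in a different distance function changes *which* tests survive, even when the reduced suite ends up the same size, which is exactly the fault-coverage effect the study reports.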

19.
We propose a novel regression test case prioritization technique based on an analysis of a dependence model for object-oriented programs. We first construct an intermediate dependence model of a program from its source code. When the program is modified, the model is updated to reflect the changes. Our constructed model represents control and data dependencies as well as information pertaining to various types of dependencies arising from object relations such as association, inheritance, and aggregation. We determine the affected nodes in the model by constructing the union of the forward slices corresponding to each changed model element. The test cases covering one or more affected nodes are selected for regression testing. The test cases in the selected regression test suite are then prioritized based on their weights, where the weight of a test case is determined by assigning weights to the affected nodes. Our experimental results indicate that our approach on average achieves an increase in the APFD metric value of 9.01% compared to a related approach.

20.
Computer Networks, 2000, 32(3): 347-364
This paper addresses the problem of interoperability testing for communication protocols. We develop a coherent framework for interoperability testing, in which the notions of interoperability, interoperability testing, interoperability test case, and interoperability test architecture are interrelated, together with a systematic interoperability test-suite derivation method based on the framework. The approach to interoperability testing is illustrated with the example of the ATM Signaling Protocol. To demonstrate the practicality of the approach, we implemented executable test suites derived by the method on a commercial ATM test platform, applied them to interoperability testing of various ATM equipment, and analyzed the testing results.
