Similar Literature
20 similar documents found.
1.
Context
Testing from finite state machines has been investigated due to its well-founded and sound theory as well as its practical application. There has been a recurrent interest in developing methods capable of generating test suites that detect all faults in a given fault domain. However, the proposal of new methods motivates the comparison with traditional methods.

Objective
We compare the methods that generate complete test suites from finite state machines. The test suites produced by the W, HSI, H, SPY, and P methods are analyzed in different configurations.

Method
Complete and partial machines were randomly generated with varying numbers of states, inputs, outputs, and transitions. These different configurations were used to compare test suite characteristics (number of resets, test case length) and the test suite length (i.e., the sum of the lengths of its test cases). The fault detection ratio was evaluated using mutation testing to produce faulty implementations with an extra state.

Results
On average, the recent methods (H, SPY, and P) produced longer test cases but smaller test suites than the traditional methods (W, HSI). The recent methods generated test suites of similar length, though P produced slightly smaller test suites. The SPY and P methods had the highest fault detection ratios and HSI had the lowest. For all methods, there was a positive correlation between the number of resets and the test suite length, and between the test case length and the fault detection ratio.

Conclusion
The recent methods rely on fewer and longer test cases to reduce the overall test suite length, while the traditional methods produce more and shorter test cases. Longer test cases are correlated with the fault detection ratio, which favored SPY, though all methods achieve a ratio of over 92%.
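The abstract measures a suite by its number of resets (one per test case) and by the test suite length, defined as the sum of the lengths of its test cases. A minimal sketch of these two measures on a hypothetical test suite (not data from the study):

```python
# Each test case is a sequence of inputs applied after a reset, so the
# number of resets equals the number of test cases and the suite length
# is the sum of the individual test case lengths.
test_suite = [              # hypothetical suite for some FSM
    ["a", "b", "a"],
    ["b", "b"],
    ["a", "a", "b", "a"],
]

resets = len(test_suite)
suite_length = sum(len(tc) for tc in test_suite)
print(resets, suite_length)  # 3 resets, total length 9
```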

2.
A new approach to software reliability modeling is discussed in which variables indirectly related to software reliability are used to provide additional information for the modeling process. Previous studies, empirical and theoretical evidence, and results from experiments indicate that there is a strong relationship between software reliability and coverage of the program elements required to be exercised by structural testing criteria. This paper develops a binomial-type coverage-based software reliability model through the definition of a coverage-based failure rate function. The Binomial software reliability Model Based on Coverage (BMBC) is proposed and discussed. In the BMBC, test data between failures is used instead of time as the independent variable; the model was assessed with test data from a real application, making use of the following structural testing criteria: all-nodes, all-edges, and potential-uses (a data-flow-based family of testing criteria). The results from our experiments have shown that our modeling approach has some advantages over some traditional reliability models and points to a very promising research direction in software reliability.
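The abstract does not reproduce the coverage-based failure rate function itself; purely as a hedged illustration of what a binomial-type model looks like when achieved coverage c replaces time as the exposure variable, one plausible form (not necessarily the BMBC's) is

$$\mu(c) = N\left(1 - e^{-\phi c}\right), \qquad \lambda(c) = \frac{d\mu}{dc} = N\,\phi\,e^{-\phi c},$$

where N is the finite number of inherent faults, \phi a per-fault exposure rate, \mu(c) the expected number of failures observed by the time coverage c is reached, and \lambda(c) the corresponding coverage-based failure rate.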

3.
The aim of this paper is to introduce a systematic approach to integration testing of software systems. Various test data selection criteria for integration testing are presented, coverage measures are introduced, and the interconnections between them are discussed. The main principle is to transfer and adapt test criteria and coverage measures which are useful for unit testing to the level of integration testing. Test criteria help the tester to organise the test process. They should be chosen in accordance with the available test effort. Test coverage measures are defined as the ratio between the test cases required for satisfying a criterion and those among them which have been executed. The measures are used to obtain information about the completeness of integration tests. The approach is described for data-flow and control-flow oriented criteria and measures. The intention is to enable the tester to specify integration tests in advance in terms of effort, and to evaluate the results in terms of test completeness.
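A minimal sketch of such a coverage measure (hypothetical test cases, not the paper's exact definition): the measure is the fraction of the test cases required by a criterion that have actually been executed.

```python
# Hypothetical test cases: those required to satisfy an integration test
# criterion versus those that have actually been executed so far.
required = {"t1", "t2", "t3", "t4", "t5"}
executed = {"t1", "t3", "t4", "t6"}

coverage = len(required & executed) / len(required)
print(f"integration test coverage: {coverage:.0%}")  # 60%
```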

4.
5.
Graphical user interfaces are pervasive in modern software systems, and to ensure their quality it is important to test them. Two primary classes of automated GUI testing approaches, those based on static models and those based on dynamic event-extraction, present tradeoffs in cost and effectiveness. For example, static model-based GUI testing techniques can create test cases that contain nonexecutable events, whereas dynamic event-extraction based GUI testing techniques can create larger numbers of duplicate test cases. To better understand the effects of these tradeoffs, we created a GUI testing framework that facilitates fair comparison of different GUI testing techniques, and we conducted a controlled experiment comparing representative versions of static-model based and dynamic event-extraction based testing techniques on several GUI-based Java applications. Our study reveals several cost and effectiveness tradeoffs between the techniques, with implications for research and practice.

6.

Context

The increasing presence of Object-Oriented (OO) programs in industrial systems is progressively drawing the attention of mutation researchers toward this paradigm. However, while research contributions on this topic are plentiful, the number of empirical results is still marginal and mostly provided by researchers rather than practitioners.

Objective

This article reports our experience using mutation testing to measure the effectiveness of an automated test data generator from a user perspective.

Method

In our study, we applied both traditional and class-level mutation operators to FaMa, an open source Java framework currently being used for research and commercial purposes. We also compared and contrasted our results with the data obtained from some motivating faults found in the literature and two real tools for the analysis of feature models, FaMa and SPLOT.

Results

Our results are summarized in a number of lessons learned supporting previous isolated results as well as new findings that hopefully will motivate further research in the field.

Conclusion

We conclude that mutation testing is an effective and affordable technique to measure the effectiveness of test mechanisms in OO systems. We found, however, several practical limitations in current tool support that should be addressed to facilitate the work of testers. We also found a lack of specific techniques and tools for applying mutation testing at the system level.
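The effectiveness measure underlying such a mutation study is the mutation score; a minimal sketch with hypothetical counts (the FaMa figures are not given in the abstract):

```python
# Mutation score = killed mutants / (generated mutants - equivalent mutants).
generated = 1000    # hypothetical counts, for illustration only
equivalent = 80
killed = 750

mutation_score = killed / (generated - equivalent)
print(f"mutation score: {mutation_score:.1%}")  # 81.5%
```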

7.
Advances in digital technologies have contributed to a significant reduction in accidents caused by hardware failures. However, the growing complexity of the functions performed by embedded software has increased the number of accidents caused by software faults in critical systems. Moreover, due to the highly competitive market, software-intensive subsystems are usually developed by different suppliers. Often these subsystems are required to interact with each other in order to provide a collaborative service. Testing approaches for subsystem integration support verification of the quality of service, focusing on the subsystems' interfaces. The increasing complexity and tight coupling of real-time subsystems make integration testing unmanageable. The ad-hoc approach to testing is becoming less effective and more expensive. This article presents an integration testing approach named InRob, designed to verify the interoperability and robustness related to timing constraints of real-time embedded software. InRob guides the construction, based on formal models, of interoperability and robustness test cases related to delays and time-outs of the messages exchanged at the interfaces of interconnected subsystems. The proposed formalism supports automatic test case generation by verifying the relevant properties in the service behavioral model. As timing constraints are critical properties of aerospace systems, the feasibility of InRob is shown in the integration testing process of a telescope on board a satellite. The process is instantiated with existing testing tools and the case study is the software embedded in the telescope.
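InRob's formal models are not given in the abstract; as a purely hypothetical sketch of the kind of timing check such interface tests perform, each observed message delay can be compared against the time-out specified for its interface to obtain a robustness verdict:

```python
# Hypothetical interface time-outs (ms) and observed message delays (ms).
timeouts = {"cmd_ack": 50, "telemetry": 200, "housekeeping": 1000}
observed = [("cmd_ack", 42), ("telemetry", 230), ("housekeeping", 510)]

# One verdict per message: fail if the observed delay exceeds the time-out.
for msg, delay in observed:
    verdict = "pass" if delay <= timeouts[msg] else "fail"
    print(msg, delay, verdict)
```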

8.
Context
Software has become an innovative solution nowadays for many applications and methods in science and engineering. Ensuring the quality and correctness of software is challenging because each program has different configurations and input domains. To ensure the quality of software, all possible configurations and input combinations need to be evaluated against their expected outputs. However, such exhaustive testing is impractical because of time and resource constraints due to the large domains of inputs and configurations. Thus, different sampling techniques have been used to sample these input domains and configurations.

Objective
Combinatorial testing can be used to effectively detect faults in software-under-test. This technique uses combinatorial optimization concepts to systematically minimize the number of test cases by considering the combinations of inputs. This paper proposes a new strategy to generate combinatorial test suites using Cuckoo Search concepts.

Method
Cuckoo Search is used in the design and implementation of a strategy to construct optimized combinatorial sets. The strategy consists of different construction algorithms, which are combined to serve the Cuckoo Search.

Results
The efficiency and performance of the new technique were demonstrated through different sets of experiments. The effectiveness of the strategy is assessed by applying the generated test suites to a real-world case study for the purpose of functional testing.

Conclusion
Results show that the generated test suites can detect faults effectively. In addition, the strategy also opens a new direction for the application of Cuckoo Search in the context of software engineering.
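The abstract does not detail the construction algorithms; a minimal sketch of the objective they optimise, counting how many 2-way (pairwise) parameter-value combinations a candidate suite still leaves uncovered, which a Cuckoo-Search-style loop would then drive toward zero (hypothetical parameters and values):

```python
from itertools import combinations, product

# Hypothetical configuration model: three parameters with two values each.
domains = {"os": ["linux", "win"], "db": ["mysql", "pg"], "ui": ["web", "cli"]}

def uncovered_pairs(suite):
    """Count the 2-way combinations not yet covered by the candidate suite."""
    params = list(domains)
    required = {
        ((p1, v1), (p2, v2))
        for p1, p2 in combinations(params, 2)
        for v1, v2 in product(domains[p1], domains[p2])
    }
    covered = {
        ((p1, test[p1]), (p2, test[p2]))
        for test in suite
        for p1, p2 in combinations(params, 2)
    }
    return len(required - covered)

suite = [{"os": "linux", "db": "mysql", "ui": "web"},
         {"os": "win", "db": "pg", "ui": "cli"}]
print(uncovered_pairs(suite))  # 6 pairs still uncovered by this 2-test suite
```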

9.
Software validation embodies two notions: fault removal and fault forecasting. Statistical testing involves exercising a piece of software by supplying it with input values that are randomly selected according to a defined probability distribution on its input domain. It can be used as a practical tool for revealing faults in a fault removal phase, and for assessing software dependability in a fault forecasting phase. In both of these, its efficiency is linked to the adequacy of the input probability distribution with respect to the test experiment goal. In this paper a mixed validation strategy combining deterministic and random test data is defined, and the theoretical and experimental work performed to support the strategy is reported. The quoted results relate to the unit testing of four real programs from the nuclear field. They confirm the high fault revealing power of statistical structural testing. Two main directions for further investigation of statistical testing are indicated by the reported work. These are described and solutions to the associated problems are outlined.
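A minimal sketch of the two ingredients of such a mixed strategy, a handful of deterministic inputs chosen by the tester plus inputs drawn at random from a defined probability distribution over the input domain (the actual profiles used for the nuclear programs are not given in the abstract):

```python
import random

random.seed(0)

# Deterministic test inputs chosen by the tester (hypothetical values).
deterministic = [0, 1, -1, 2**31 - 1]

# Random inputs drawn according to a defined probability distribution on the
# input domain; here a simple non-uniform mix of two sub-domains.
def draw_random_input():
    if random.random() < 0.8:                    # 80% of the profile
        return random.randint(-100, 100)
    return random.randint(-2**31, 2**31 - 1)     # remaining 20%

statistical = [draw_random_input() for _ in range(20)]
test_inputs = deterministic + statistical
print(len(test_inputs), test_inputs[:6])
```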

10.
Evaluation of the adequacy of a test set consisting of one or more test cases is a problem often encountered in software testing environments. Two test adequacy criteria are considered, namely the data-flow-based all-uses criterion and a mutation-based criterion. An empirical study was conducted to compare the ‘difficulty’ of satisfying the two criteria and their costs. Similar studies conducted in the past are discussed in the light of this study. A discussion is also presented of how and why the results of this study, when viewed in conjunction with the results of earlier comparisons of testing methods, are useful to a software test team.

11.
The effectiveness in discovering errors of symbolic evaluation, of testing, and of static program analysis is studied. The three techniques are applied to a diverse collection of programs and the results compared. Symbolic evaluation is used to carry out symbolic testing and to generate symbolic systems of path predicates. The use of the predicates for automated test data selection is analysed. Several conventional types of program testing strategies are evaluated. The strategies include branch testing, structured testing and testing on input values having special properties. The static source analysis techniques that are studied include anomaly analysis and interface analysis. Examples are included which describe typical situations in which one technique is reliable but another unreliable. The effectiveness of symbolic testing is compared with testing on actual data and with the use of an integrated methodology that includes both testing and static source analysis. Situations in which symbolic testing is difficult to apply or not effective are discussed. Different ways in which symbolic evaluation can be used for generating test data are described. Those ways for which it is most effective are isolated. The paper concludes with a discussion of the most effective uses to which symbolic evaluation can be put in an integrated system which contains all three of the validation techniques that are studied.
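As a small worked illustration of a path predicate (not the paper's own system, which predates modern solvers): for the path that takes both branches true in `if x > 0: if y < x: ...`, the predicate is x > 0 and y < x, and any solution to it is candidate test data. A hedged sketch using the z3 Python bindings (requires the z3-solver package) to solve such a predicate automatically:

```python
from z3 import Ints, Solver, sat

# Path predicate for the both-branches-true path of:
#   if x > 0:
#       if y < x:
#           ...
x, y = Ints("x y")
solver = Solver()
solver.add(x > 0, y < x)

if solver.check() == sat:
    model = solver.model()
    print("test data:", model[x], model[y])  # any solution, e.g. x=1, y=0
```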

12.
The term grammar-based software describes software whose input can be specified by a context-free grammar. This grammar may occur explicitly in the software, in the form of an input specification to a parser generator, or implicitly, in the form of a hand-written parser. Grammar-based software includes not only programming language compilers, but also tools for program analysis, reverse engineering, software metrics and documentation generation. Hence, ensuring their completeness and correctness is a vital prerequisite for their use. In this paper we propose a strategy for the construction of test suites for grammar-based software, and illustrate this strategy using the ISO C++ grammar. We use the concept of grammar-rule coverage as a pivot for the reduction of an implementation-based test suite, and demonstrate a significant decrease in the size of this suite. The effectiveness of this reduced test suite is compared to the original test suite with respect to code coverage and, more importantly, fault detection. This work greatly expands upon previous work in this area and utilises large-scale mutation testing to compare the effectiveness of grammar-rule coverage to that of statement coverage as a reduction criterion for test suites of grammar-based software. This work finds that when grammar-rule coverage is used as the sole criterion for reducing test suites of grammar-based software, the fault detection capability of that reduced test suite is greatly diminished when compared to other coverage criteria such as statement coverage.
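A minimal sketch of reduction pivoted on grammar-rule coverage, greedily keeping only test cases that exercise at least one previously uncovered rule (hypothetical rule sets, not the ISO C++ data from the study):

```python
# Each test case maps to the set of grammar rules it exercises (hypothetical).
rules_covered = {
    "t1": {"expr", "stmt", "decl"},
    "t2": {"expr", "stmt"},
    "t3": {"decl", "template"},
    "t4": {"expr"},
}

# Greedy reduction: keep a test only if it adds coverage of a new rule.
kept, seen = [], set()
for test, rules in rules_covered.items():
    if rules - seen:
        kept.append(test)
        seen |= rules
print(kept)  # ['t1', 't3'] -- same rule coverage with half the tests
```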

13.
Program mutation is a fault-based technique for measuring the effectiveness of test cases that, although powerful, is computationally expensive. The principal expense of mutation is that many faulty versions of the program under test, called mutants, must be created and repeatedly executed. This paper describes a tool, called JavaMut, that implements 26 traditional and object-oriented mutation operators for supporting mutation analysis of Java programs. The current version of the tool is based on syntactic analysis and reflection for implementing mutation operators. JavaMut is interactive; it provides a graphical user interface to make mutation analysis faster and less painful. Thanks to such automated tools, mutation analysis should be achievable at reasonable cost.

14.
Context
Function Block Diagram (FBD) is increasingly used in safety-critical applications. Test coverage issues for FBDs are frequently raised by regulators and users. However, there is little work on this aspect of testing FBDs at the model level. Our previous study designed a new data-flow test coverage criterion, FB-Path Complete Condition Test Coverage (FPCC), that can directly test FBD structures and effectively detect function mutation errors. Nevertheless, because the FPCC scheme involves several data-flow concepts, it is somewhat complicated to comprehend and to generate FPCC-compliant test cases. An automatic test suite generator for FPCC is highly desirable.

Objective
This study designs an automatic test case generator, FPCCTestGen, for FPCC so as to enhance the practicability and acceptance of the FPCC approach.

Method
First, a supporting infrastructure for performing automatic FBD-to-UPPAAL-for-FPCC transformation is designed. The supporting infrastructure includes templates, declarations, and functions as building blocks for the transformation. Then, for each input FBD, represented in PLCopen XML format, FPCCTestGen performs parsing and converts FBD components into corresponding UPPAAL model components using the aforementioned building blocks. After that, queries related to FPCC characteristics are submitted to the UPPAAL model checker for verification. Finally, the verification traces are analyzed to obtain an FPCC-compliant test suite.

Results
A safety injection system is used as a case study. Preliminary results show that the generated test suite achieves the highest FPCC percentage with a near-optimal number of test cases.

Conclusion
This automatic test case generation tool is effective and can thus promote the use of the new test coverage criterion. The methodology used in FPCCTestGen is generic and can be applied to test suite generation for other test criteria on data-flow programs.

15.
Context
Data-flow testing approaches have been used for procedural and object-oriented programs, and shown to be effective in detecting faults. However, few such approaches have been evaluated for aspect-oriented programs. In such programs, data-flow interactions can occur between base classes and aspects, which can affect the behavior of both. Faults resulting from such interactions are hard to detect unless the interactions are specifically targeted during testing.

Objective
This paper presents an approach and tool implementation for measuring data-flow coverage based on state variables defined in base classes or aspects in AspectJ programs. The paper also reports on an empirical study that compares the cost and effectiveness of data-flow test criteria that are based on state variables with two control-flow criteria.

Method
Effectiveness of the criteria was evaluated for various fault types. Cost-effectiveness of test suites that cover all state variable definition-use associations (DUAs) was evaluated for three coverage levels: 100%, 90%, and 80%.

Results
The effort needed to obtain a test case that achieves data-flow coverage is higher than the effort needed to obtain a test case that covers a block or a branch in an advised class. Covering certain data flow associations requires more effort than for other types of data flow associations. The data-flow test criteria based on state variables of a base-class are in general more effective than control-flow criteria.

Conclusions
Overall, it is cost-effective to obtain test suites at the 90% coverage level of data-flow criteria.

16.

Context

A feature model (FM) represents the valid combinations of features in a domain. The automated extraction of information from FMs is a complex task that involves numerous analysis operations, techniques and tools. Current testing methods in this context are manual and rely on the ability of the tester to decide whether the output of an analysis is correct. However, this is acknowledged to be time-consuming, error-prone and in most cases infeasible due to the combinatorial complexity of the analyses; this is known as the oracle problem.

Objective

In this paper, we propose using metamorphic testing to automate the generation of test data for feature model analysis tools, overcoming the oracle problem. An automated test data generator is presented and evaluated to show the feasibility of our approach.

Method

We present a set of relations (so-called metamorphic relations) between input FMs and the set of products they represent; a small example of such a relation is sketched after this abstract. Based on these relations, and given an FM and its known set of products, a set of neighbouring FMs together with their corresponding sets of products is automatically generated and used for testing multiple analyses. Complex FMs representing millions of products can be efficiently created by applying this process iteratively.

Results

Our evaluation results using mutation testing and real faults reveal that most faults can be automatically detected within a few seconds. Two defects were found in FaMa and another two in SPLOT, two real tools for the automated analysis of feature models. Also, we show how our generator outperforms a related manual suite for the automated analysis of feature models and how this suite can be used to guide the automated generation of test cases, obtaining important gains in efficiency.

Conclusion

Our results show that the application of metamorphic testing in the domain of automated analysis of feature models is efficient and effective in detecting most faults in a few seconds without the need for a human oracle.
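One metamorphic relation of the kind the approach builds on can be sketched as follows (a hedged illustration; the paper's exact relations may be formulated differently): if an optional feature is added under a parent feature of a model M to obtain a neighbouring model M', then the products of M' are those of M plus, for every product of M containing the parent, that product extended with the new feature. The expected product set of M' then serves as the oracle for the analysis of M'.

```python
# Hypothetical feature model M, given directly by its known set of products.
products_m = [{"root", "A"}, {"root", "A", "B"}]

def add_optional(products, parent, new_feature):
    """Expected product set after adding an optional child of `parent`."""
    extended = [p | {new_feature} for p in products if parent in p]
    return products + extended

# The analysis tool's output for the neighbouring model M' can be checked
# against this expected set, so no human oracle is needed.
products_m_prime = add_optional(products_m, parent="A", new_feature="C")
print(products_m_prime)
```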

17.
Software testing during the development process of embedded software is not only complex, but also the heart of quality control. Multi-core embedded software testing faces even more challenges. Major issues include: (1) how demanding efforts and repetitive tedious actions can be reduced; (2) how resource constraints of embedded system platforms, such as limited time and memory capacity, can be tackled; (3) how the degree of parallelism of embedded software can be controlled to exploit multi-core CPU computing capacity; (4) how analysis is exercised to ensure sufficient coverage testing of embedded software; (5) how to perform data synchronization to address issues such as race conditions in interrupt-driven multi-core embedded systems; (6) how to perform high-level reliability testing to ensure customer satisfaction. To address these issues, this study develops an automatic testing environment for multi-core embedded software (ATEMES). Based on the automatic mechanism, the system can parse source code, instrument source code, generate testing programs for test cases and test drivers, support generation of primitive, structure and object types of test input data, perform multi-round cross-testing, and visualize testing results. To both reduce the test engineer's burden and enhance efficiency during embedded software testing, the system provides automatic testing functions including unit testing, coverage testing, and multi-core performance monitoring. Moreover, ATEMES can perform automatic multi-round cross-testing benchmark experiments on a multi-core embedded platform for parallel programs adopting the Intel TBB library, in order to recommend optimized parallel parameters such as pipeline tokens. Using ATEMES on the ARM11 multi-core platform to conduct testing experiments, the results show that the constructed testing environment is effective, reduces the test engineer's burden, and enhances the efficiency of testing tasks.

18.
Automated test data generation has remained a topic of considerable interest for several decades because it lies at the heart of attempts to automate the process of software testing. This paper reports the results of an empirical study using the dynamic symbolic-execution tool, CUTE, and a search-based tool, AUSTIN, on five non-trivial open source applications. The aim is to provide practitioners with an assessment of what can be achieved by existing techniques with little or no specialist knowledge and to provide researchers with baseline data against which to measure subsequent work. To achieve this, each tool is applied ‘as is’, with neither additional tuning nor supporting harnesses and with no adjustments applied to the subject programs under test. The mere fact that these tools can be applied ‘out of the box’ in this manner reflects the growing maturity of automated test data generation. However, as might be expected, the study reveals opportunities for improvement and suggests ways to hybridize these two approaches that have hitherto been developed entirely independently.

19.
Error flow analysis and testing techniques focus on the introduction of errors through code faults into data states of an executing program, and their subsequent cancellation or propagation to output. The goals and limitations of several error flow techniques are discussed, including mutation analysis, fault-based testing, PIE analysis, and dynamic impact analysis. The attributes desired of a good error flow technique are proposed, and a model called dynamic error flow analysis (DEFA) is described that embodies many of these attributes. A testing strategy is proposed that uses DEFA information to select an optimal set of test paths and to quantify the results of successful testing. An experiment is presented that illustrates this testing strategy. In this experiment, the proposed testing strategy outperforms mutation testing in catching arbitrary data state errors.
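PIE analysis, one of the techniques mentioned above, estimates the chance that a fault at a program location leads to a failure as the product of three empirically measured probabilities; as a hedged illustration (assuming the usual independence simplification):

$$\Pr(\text{failure} \mid \text{fault at } l) \approx E_l \cdot I_l \cdot P_l,$$

where E_l is the probability that location l is executed, I_l the probability that executing the faulty location infects the data state, and P_l the probability that the infected data state propagates to observable output.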

20.