Similar Literature
 20 similar documents found (search time: 78 ms)
1.
Context: The Programmable Logic Controller (PLC) is being integrated into the automation and control of computer systems in safety-critical domains at an increasing rate. Thoroughly testing such software to ensure safety is crucial. Function Block Diagram (FBD) is a popular data-flow programming language for PLCs. Current practice often involves translating an FBD program into an equivalent C program for testing. Little research has been conducted on coverage criteria for directly testing a data-flow program, such as an FBD program, at the model level, and there are no commonly accepted structural test coverage criteria for data-flow programs. The objective of this study is to develop an effective structural test coverage criterion for testing model-level FBD programs. The proposed testing scheme can be used to detect mutation errors at the logical function level. Objective: The purpose of this study is to design a new test coverage criterion that can directly test FBD programs and effectively detect logical function mutation errors. Method: A complete test set for each function and function block in an FBD program is defined. Moreover, the method augments the data-flow path concept with a sensitivity check to avoid fault masking and effectively detect logical function mutation errors. Results: Preliminary experiments show that this test coverage criterion is comprehensive and effective for error detection. Conclusion: The proposed coverage criterion is general and can be applied to real cases to improve the quality of data-flow program design.
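The logical-function mutation detection described in this abstract can be illustrated with a minimal sketch: assuming a two-input Boolean block and a single operator mutation (the block names and the AND-to-OR mutant are illustrative, not taken from the paper), a complete test set for the block is guaranteed to distinguish the original from any mutated logical function.

```python
from itertools import product

# Original logical function (standing in for one block of an FBD program)
# and a mutant in which AND has been replaced by OR. Both are illustrative.
def block_original(a, b):
    return a and b

def block_mutant(a, b):
    return a or b

# A complete test set for a 2-input Boolean function exercises every
# input combination, so any logical-function mutation must produce a
# differing output on at least one test.
complete_test_set = list(product([False, True], repeat=2))

killed = [t for t in complete_test_set
          if block_original(*t) != block_mutant(*t)]
print(killed)  # tests that distinguish the original from the mutant
```

Running the sketch shows that two of the four tests kill this particular mutant, which is the sense in which a complete test set detects logical-function mutations.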

2.
Logic coverage criteria have been widely used in the testing of safety-critical software. In the past few years, fault-based logic coverage criteria have been studied intensively both in theory and in practice. However, there is a lack of authentic evidence comparing fault-based logic coverage criteria with other logic coverage criteria, such as branch coverage and modified condition/decision coverage (MC/DC). In this paper, we present a comprehensive empirical study that compares these logic coverage criteria on test case prioritization, which is currently a hot topic in software testing. Several useful conclusions are drawn from our research: (1) Fine-grained coverage criteria are always more effective and efficient. (2) The effectiveness of fault-based logic coverage criteria is not significantly different from that of MC/DC in statistical terms, but the former is more stable. (3) A random strategy is more effective than branch coverage if a certain number of test cases are redundant.
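The granularity difference between branch coverage and MC/DC mentioned above can be sketched as follows; the decision `a and (b or c)` and the helper names are illustrative, not taken from the paper. Branch coverage is satisfied by any two tests with opposite outcomes, while MC/DC additionally demands, for each condition, a test pair showing that condition's independent effect:

```python
# Decision under test (illustrative): d = a and (b or c)
def d(a, b, c):
    return a and (b or c)

# Branch coverage needs only the decision outcome to be both True and False:
branch_tests = [(True, True, False),    # d -> True
                (False, False, False)]  # d -> False
assert [d(*t) for t in branch_tests] == [True, False]

# MC/DC additionally requires, for each condition, a pair of tests in which
# that condition alone flips and the decision outcome changes with it.
def shows_independence(cond_index, t1, t2):
    only_cond_flips = (sum(x != y for x, y in zip(t1, t2)) == 1
                       and t1[cond_index] != t2[cond_index])
    return only_cond_flips and d(*t1) != d(*t2)

# Independence pair for condition a: flipping a alone flips the decision.
print(shows_independence(0, (True, True, False), (False, True, False)))
```

This is why finer-grained criteria such as MC/DC distinguish test suites that coarser branch coverage treats as equivalent.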

3.
4.
A number of coverage criteria have been proposed for testing classes and class clusters modeled with state machines. Previous research has revealed their limitations in terms of their capability to detect faults. As these criteria can be considered to execute the control flow structure of the state machine, we are investigating how data flow information can be used to improve them in the context of UML state machines. More specifically, we investigate how such data flow analysis can be used to further refine the selection of a cost‐effective test suite among alternative, adequate test suites for a given state machine criterion. This paper presents a comprehensive methodology to perform data flow analysis of UML state machines—with a specific focus on identifying the data flow from OCL guard conditions and operation contracts—and applies it to a widely referenced coverage criterion, the round‐trip path (transition tree) criterion. It reports on two case studies whose results show that data flow information can be used to select the best transition tree, in terms of cost effectiveness, when more than one satisfies the transition tree criterion. The results also suggest that different trees are complementary in terms of the data flow that they exercise, thus leading to the detection of intersecting but distinct subsets of faults. Copyright © 2009 John Wiley & Sons, Ltd.

5.
High level Petri nets have been extensively used for modeling concurrent systems; however, their strong expressive power reduces their ability to be easily analyzed. Currently there are few effective formal analysis techniques to support the validation of high level Petri nets. The executable nature of high level Petri nets means that during validation they can be analyzed using test criteria defined on the net model. Recently, theoretical test adequacy coverage criteria for concurrent systems using high level Petri nets have been proposed. However, determining the applicability of these test adequacy criteria has not yet been undertaken. In this paper, we present an approach for evaluating the proposed test adequacy criteria for high level Petri nets through experimentation. In our experiments we use the simulation functionality of the model checker SPIN to analyze various test coverage criteria on high level Petri nets.

6.
7.
RELAY is a model of faults and failures that defines failure conditions: descriptions of test data for which execution guarantees that a fault originates erroneous behavior that also transfers through computations and information flow until a failure is revealed. This model of fault detection provides a framework within which the capabilities of other testing criteria can be evaluated. Three test data selection criteria that detect faults in six fault classes are analyzed. This analysis shows that none of these criteria is capable of guaranteeing detection for these fault classes, and it points out two major weaknesses of these criteria. The first weakness is that the criteria do not consider the potential unsatisfiability of their rules: each criterion includes rules that are sufficient to cause potential failures for some fault classes, yet when such rules are unsatisfiable, many faults may remain undetected. The second weakness is their failure to integrate their proposed rules.

8.
To meet the challenge of creating test vectors for the Alpha 21164 microprocessor, Compaq's engineers describe a test generation and grading scheme that addresses time-to-market, quality, and cost concerns.

9.
The primary advantage of model-based performance analysis is its ability to facilitate sensitivity and predictive analysis, in addition to providing an estimate of the application performance. To conduct model-based analysis, it is necessary to build a performance model of an application that represents the application structure in terms of the interactions among its components, using an appropriate modeling paradigm. While several research efforts have been devoted to the development of the theoretical aspects of model-based analysis, its practical applicability has been limited despite the advantages it offers. This limited practical applicability is due to the lack of techniques available to estimate the parameters of the performance model of the application. Since the model parameters cannot be estimated in a realistic manner, the results obtained from model-based analysis may not be accurate. In this paper, we present an empirical approach in which profile data in the form of block coverage measurements is used to parameterize the performance model of an application. We illustrate the approach using a network routing simulator called the Maryland routing simulator (MaRS). Validation of the performance estimate of MaRS obtained from the performance model parameterized using our approach demonstrates the viability of our approach. We then illustrate how the model could be used for predictive performance analysis using two scenarios. By virtue of using code coverage measurements to parameterize a performance model, we integrate two mature, yet independent research areas, namely, software testing and model-based performance analysis.

10.
The Modified Condition Decision Coverage (MCDC) test criterion is a mandatory requirement for the testing of avionics software as per the DO‐178B standard. This paper presents a formal analysis for the three different forms of MCDC. In addition, a recently proposed test criterion, Reinforced Condition Decision Coverage (RCDC), has also been investigated in comparison with MCDC. In contrast with the earlier analysis approaches that have been based on empirical and probabilistic models, the principles of Boolean logic are used here to study the fault detection effectiveness of the MCDC and RCDC criteria. Based on the properties of Boolean specifications, the analysis identifies the detection conditions for six kinds of faults. The results allow the measurement of the effort required in testing and the effectiveness of generated test sets satisfying the MCDC and RCDC criteria. Copyright © 2004 John Wiley & Sons, Ltd.
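The Boolean-logic style of analysis described above can be sketched by computing a fault's detection condition as the set of input vectors on which the faulty expression differs from the specification; the expression and the variable-negation fault below are illustrative examples, not the paper's six fault classes.

```python
from itertools import product

# Illustrative Boolean specification and a variable-negation fault
# (c replaced by not c); neither is taken from the paper.
spec  = lambda a, b, c: (a and b) or c
fault = lambda a, b, c: (a and b) or (not c)

# The detection condition of a fault is the set of inputs on which the
# faulty expression differs from the specification; any test satisfying
# it is guaranteed to reveal the fault.
detection = [t for t in product([False, True], repeat=3)
             if spec(*t) != fault(*t)]
print(len(detection), "of 8 input vectors detect this fault")
```

The size of the detection condition gives a simple measure of how hard a fault class is to catch, which is the kind of effort/effectiveness reasoning the abstract refers to.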

11.
In this paper we discuss ways in which coverage analysis, obtained during execution of test cases over a rule‐base, can be used to highlight problems in both the test suite and the rule‐base, thereby pointing to areas in which we cannot guarantee or predict the system's performance. In particular, we present a series of heuristics which use coverage information and meta‐knowledge about the larger population to select additional test cases from the population, in the event that the initial test set is incomplete. This forms the basis of an incremental approach to rule‐base testing which allows us to both increase completeness of the test set and improve coverage of the rule‐base, thereby increasing the kinds of cases for which the rule‐base has been executed during testing. We demonstrate this heuristic approach to test data selection using information generated by TRUBAC, a tool which implements the coverage analysis methods, applied to analyze a prototype system for diagnosis of rheumatological diseases.

12.
Two facts about declarative programming prevent the application of conventional testing methods. First, the classical test coverage measures, such as statement, branch or path coverage, cannot be used, since no notion of control flow exists in declarative programs. Second, there is no widely accepted language available for formal specification, since predicate logic, which is the most common formalism for declarative programming, is already a very high-level abstract language. This paper presents a new approach extending previous work by the authors on test input generation for declarative programs. For this purpose, the existing program instrumentation notion is extended and a new logic coverage measure is introduced. The approach is mathematically formalized, and the goal of achieving 100% program logic coverage controls the automatic test input generation. The method is depicted by means of logic programming; the results are, however, generally applicable. Finally, the concepts introduced have been used practically within a test environment. © 1998 John Wiley & Sons, Ltd.

13.
This study aimed to develop a set of evaluation criteria for English learning websites. These criteria can assist English teachers/web designers in designing effective websites for their English courses and can also guide English learners in screening for appropriate and reliable websites to use in increasing their English ability. To fulfill our objective, we employed a three-phase research procedure: (a) establishing a preliminary set of criteria from a thorough review of the literature, (b) evaluating and refining the preliminary criteria by conducting interviews with in-service teachers and learners, and (c) validating and finalizing the criteria according to expert validity surveys. The established criteria have 46 items, classified into 6 categories (the number of items within the category) – general information (12), integrated English learning (13), listening (4), speaking (6), reading (5), and writing (6). The general information evaluates the authority, accuracy, and format of the learning websites. The integrated English learning evaluates the overall information relevant to English learning materials as well as the common features of the four language skills. The criteria for listening, speaking, reading, and writing, for example, examine the suitable intonation, skills of discourse, classification of reading articles by their attributes, and the proper use of discussion boards for students when practicing their writing skills. Based on qualitative and quantitative analysis of the interviews and expert validity surveys, we confirmed the effectiveness of the developed evaluation criteria with satisfactory indexes of inter-rater reliability, content validity, and factorial validity.

14.
Test coverage is about assessing the relevance of unit tests against the tested application. It is widely acknowledged that software with a "good" test coverage is more robust against unanticipated execution, thus lowering the maintenance cost. However, ensuring good quality coverage is challenging, especially since most of the available test coverage tools do not discriminate between software components that require "strong" coverage and the components that require less attention from the unit tests. Hapao is an innovative test coverage tool, implemented in the Pharo Smalltalk programming language. It employs an effective and intuitive graphical representation to visually assess the quality of the coverage. A combination of appropriate metrics and relations visually shapes methods and classes, which indicates to the programmer whether more effort on testing is required. This paper presents the important features of Hapao by illustrating its application on an open source software system.

15.
LUSTRE is a data‐flow synchronous language, on which is based the SCADE tool‐suite, widely used for specifying and programming critical reactive applications in the areas of avionics, energy or transport. Therefore, testing LUSTRE programs, that is, generating test data and assessing the achieved test coverage, is a major issue. Usual control‐flow‐based test coverage criteria (statement coverage, branch coverage, etc.) are not relevant for LUSTRE programs. In this paper, a new hierarchy of adequacy criteria tailored to the LUSTRE language is presented. These criteria are defined on operator networks, which are usual models for LUSTRE programs. The criteria satisfaction measure is automated in LUSTRUCTU, a non‐intrusive tool (no instrumentation of the code), based on the symbolic computation of path activation conditions. The applicability and the relevance of the criteria are assessed on a case study. Copyright © 2008 John Wiley & Sons, Ltd.

16.
The market for software components is growing, driven on the "demand side" by the need for rapid deployment of highly functional products and, on the "supply side", by distributed object standards. As components and component vendors proliferate, there is naturally a growing concern about quality and the effectiveness of testing processes. White-box testing, particularly the use of coverage criteria, is a widely used method for measuring the "thoroughness" of testing efforts. High levels of test coverage are used as indicators of good quality control procedures. Software vendors who can demonstrate high levels of test coverage have a credible claim to high quality. However, verifying such claims involves knowledge of the source code, test cases, build procedures, etc. In applications where reliability and quality are critical, it would be desirable to verify test coverage claims without forcing vendors to give up valuable technical secrets. In this paper, we explore cryptographic techniques that can be used to verify such claims. Our techniques have certain limitations, which we discuss in this paper. However, vendors who have done the hard work of developing high levels of test coverage can use these techniques (for a modest additional cost) to provide credible evidence of high coverage, while simultaneously reducing disclosure of intellectual property.

17.
The aim of software testing is to find faults in a program under test, so generating test data that can expose the faults of a program is very important. To date, current studies on generating test data for path coverage do not perform well in detecting low probability faults on the covered path. The automatic generation of test data for both path coverage and fault detection using genetic algorithms is the focus of this study. To this end, the problem is first formulated as a bi-objective optimization problem with one constraint, whose objectives are the number of faults detected in the traversed path and the risk level of these faults, and whose constraint is that the traversed path must be the target path. An evolutionary algorithm is employed to solve the formulated model, and several types of fault detection methods are given. Finally, the proposed method is applied to several real-world programs, and compared with a random method and an evolutionary optimization method in the following three aspects: the number of generations and the time consumption needed to generate desired test data, and the success rate of detecting faults. The experimental results confirm that the proposed method can effectively generate test data that not only traverse the target path but also detect faults lying in it.
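The bi-objective, one-constraint formulation described above can be sketched as a fitness function for an evolutionary search; all names, the toy path predicate, and the risk weights below are invented placeholders for the program-specific instrumentation the paper's method would supply.

```python
# Hypothetical fitness for generating test data that (i) traverses a
# target path and (ii) detects as many, and as risky, faults as possible.
# RISK, traverses() and faults_detected() are illustrative stand-ins.
RISK = {"div_by_zero": 3, "off_by_one": 1}

def traverses(test):               # constraint: must follow the target path
    return test[0] > 0

def faults_detected(test):         # which seeded faults this test exposes
    return [f for f in RISK if test[1] % 2 == 0] if traverses(test) else []

def fitness(test):
    if not traverses(test):        # constraint violated: worst fitness
        return (0, 0)
    detected = faults_detected(test)
    # Objective 1: number of faults detected on the traversed path.
    # Objective 2: total risk level of those faults.
    return (len(detected), sum(RISK[f] for f in detected))

print(fitness((1, 2)))   # traverses the path and detects both faults
print(fitness((-1, 2)))  # path constraint violated
```

A genetic algorithm would evolve candidate inputs toward tuples that dominate on both objectives while satisfying the path constraint.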

18.

19.
20.
The aim of this paper is to introduce a systematic approach to integration testing of software systems. Various test data selection criteria for integration testing are presented, coverage measures are introduced, and the interconnections among them are discussed. The main principle is to transfer and adapt test criteria and coverage measures that are useful for unit testing to the level of integration testing. Test criteria help the tester to organise the test process. They should be chosen in accordance with the available test effort. Test coverage measures are defined as the ratio between the test cases required to satisfy a criterion and those among them that have actually been executed. The measures are used to obtain information about the completeness of integration tests. The approach is described for data flow and control flow oriented criteria and measures. The intention is to enable the tester to specify integration tests in advance in terms of effort, and to evaluate the results in terms of test completeness.
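The coverage-measure definition above, a ratio relating the executed test cases to the test requirements of a criterion, can be sketched in a few lines; the call-pair requirements used here are an illustrative example, not taken from the paper.

```python
# Test requirements of an integration criterion (illustrative call pairs)
# and the subset satisfied by the test cases actually executed.
required_requirements = {"f->g", "g->h", "f->h", "h->f"}
satisfied = {"f->g", "g->h", "f->h"}

# Coverage measure: fraction of the criterion's requirements exercised.
coverage = len(satisfied & required_requirements) / len(required_requirements)
print(f"integration coverage: {coverage:.0%}")
```

Such a ratio lets the tester state in advance how much integration-test effort a criterion demands and, afterwards, how complete the executed tests are.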


Copyright © Beijing Qinyun Technology Development Co., Ltd. | 京ICP备09084417号