Similar Literature
 20 similar documents retrieved (search time: 31 ms)
1.
The Term Redundancy Method (TRM) is a novel approach for obtaining ultra-reliable programs through specification-based testing. Current specification-based testing schemes need a prohibitively large number of test cases for estimating ultra-reliability. They assume the availability of an accurate program-usage distribution prior to testing, and they assume the availability of a test oracle. This paper shows how to obtain ultra-reliable abstract data types specified with equational specifications, with a practical number of test cases, without an accurate usage distribution, and without the usual test oracle. The effectiveness of the TRM in failure detection and recovery is demonstrated on the aircraft collision avoidance system TCAS. Copyright © 2007 John Wiley & Sons, Ltd.

2.
A Test System Framework for Distributed Programs (Cited 6 times: 2 self-citations, 4 by others)
顾庆, 陈道蓄, 韩杰, 谢立, 孙钟秀. Journal of Software (软件学报), 2000, 11(8): 1053-1059
This paper proposes TFDS (test system framework for distributed software systems), a test system framework for distributed programs, and describes PSET* (distributed program structure and event trace, revised version), a prototype implementation of the framework in a heterogeneous network. The main purpose of the framework is unit testing and integration testing of distributed programs. It comprises a static part, oriented toward specification design and source-code analysis, and a dynamic part, oriented toward program execution and event-sequence analysis. Built on components, the functionality of PSET* can be relatively easily…

3.
This paper presents a theory of testing that integrates into Hoare and He's Unifying Theory of Programming (UTP). We give test cases a denotational semantics by viewing them as specification predicates. This reformulation of test cases allows for relating test cases via refinement to specifications and programs. Having such a refinement order that integrates test cases, we develop a testing theory for fault-based testing. Fault-based testing uses test data designed to demonstrate the absence of a set of pre-specified faults. A well-known fault-based technique is mutation testing. In mutation testing, first, faults are injected into a program by altering (mutating) its source code. Then, test cases that can detect these errors are designed. The assumption is that other faults will be caught, too. In this paper, we apply the mutation technique to both specifications and programs. Using our theory of testing, two new test case generation laws for detecting injected (anticipated) faults are presented: one is based on the semantic level of UTP design predicates, the other on the algebraic properties of a small programming language.
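To make the mutation idea concrete, a minimal hypothetical sketch in Python (not from the paper; the function, the injected fault, and the inputs are toy examples): a test suite is adequate for an injected relational-operator fault only if some test makes the mutant and the original disagree.

# Minimal sketch of fault-based (mutation) testing on a toy function
# (hypothetical example, not taken from the paper).

def max_of(a, b):
    """Original program under test."""
    return b if a <= b else a

def max_of_mutant(a, b):
    """Mutant: the relational operator '<=' has been mutated to '>='."""
    return b if a >= b else a

def adequate(tests):
    """A test suite is adequate for this mutant if some test kills it,
    i.e. makes the original and the mutant produce different outputs."""
    return any(max_of(a, b) != max_of_mutant(a, b) for a, b in tests)

print(adequate([(2, 2), (5, 5)]))   # False: these tests miss the injected fault
print(adequate([(1, 2)]))           # True: original -> 2, mutant -> 1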

4.
This paper presents a method to generate test cases for sequential programs and concurrent programs written in a flow-based programming language. Test cases for sequential programs are generated based on condition calculation, and can be combined to form SYN-sequences for concurrent program testing. The semantics of the language provides an infrastructure for test case generation, so our method may be considered a rigorous and systematic approach to program testing. Compared with some formal testing methods, our method avoids the state explosion problem during test formation. In addition, the complexity analysis shows that our method is time-saving. Our method has been applied to generate test cases for PPP over ATM, a subsystem of IAD that runs data and voice over DSL.

5.
6.
Test data generation is one of the most technically challenging steps of testing software, but most commercial systems currently incorporate very little automation for this step. This paper presents results from a project that is trying to find ways to incorporate test data generation into practical test processes. The results include a new procedure for automatically generating test data that incorporates ideas from symbolic evaluation, constraint-based testing, and dynamic test data generation. It takes an initial set of values for each input, and dynamically 'pushes' the values through the control-flow graph of the program, modifying the sets of values as branches in the program are taken. The result is usually a set of values for each input parameter that has the property that any choice from the sets will cause the path to be traversed. This procedure uses new analysis techniques, offers improvements over previous research results in constraint-based testing, and combines several steps into one coherent process. The dynamic nature of this procedure yields several benefits. Moving through the control flow graph dynamically allows path constraints to be resolved immediately, which is more efficient both in space and time, and more often successful than constraint-based testing. This new procedure also incorporates an intelligent search technique based on bisection. The dynamic nature of this procedure also allows certain improvements to be made in the handling of arrays, loops, and expressions; language features that are traditionally difficult to handle in test data generation systems. The paper presents the test data generation procedure, examples to explain the working of the procedure, and results from a proof-of-concept implementation. Copyright © 1999 John Wiley & Sons, Ltd.
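As a rough illustration of the dynamic "pushing" of value sets and the bisection search mentioned above (a toy sketch under assumptions, not the paper's procedure: the branch predicate, the integer-interval domain, and the monotonicity assumption are all hypothetical), a bisection search can narrow an input domain at a branch so that any remaining value takes the desired edge.

# Toy sketch of dynamic domain reduction at a single branch.

def branch_true(x):
    """Hypothetical branch condition from a program: 'if x*x - 50 > 0'."""
    return x * x - 50 > 0

def smallest_satisfying(lo, hi, pred):
    """Bisection: smallest value in [lo, hi] with pred True, assuming pred is
    monotone (False, then True) over the interval; None if it never holds."""
    if not pred(hi):
        return None
    while lo < hi:
        mid = (lo + hi) // 2
        if pred(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

lo, hi = 0, 100
cut = smallest_satisfying(lo, hi, branch_true)
print(f"reduced domain for the true edge: [{cut}, {hi}]")
# Any value chosen from [8, 100] now drives execution down the true branch.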

7.
A concurrent program consists of multiple threads of control that execute concurrently and share memory. Because the execution order among these threads is nondeterministic, testing concurrent software systems is difficult. Mutation testing is a fault-based software testing technique widely used to assess the adequacy of test suites and the effectiveness of testing techniques. A key problem in applying mutation testing to concurrent programs is how to efficiently generate a large set of mutants that simulate concurrency faults. This paper presents a mutation testing framework for concurrent programs…
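A toy sketch of what one concurrency mutation operator might look like (hypothetical example, not the framework proposed in the paper): the operator weakens synchronization by replacing a 'with lock:' guard with a no-op, producing one mutant per mutation site.

# Hypothetical concurrency mutation operator: drop a lock guard to
# simulate a missing-synchronization (atomicity) fault.

SOURCE = """\
import threading

balance = 0
lock = threading.Lock()

def deposit(amount):
    with lock:
        global balance
        balance = balance + amount
"""

def remove_lock_mutants(source):
    """Yield one mutant per 'with lock:' line; the guard is replaced by
    'if True:' so the mutant still parses but loses mutual exclusion."""
    lines = source.splitlines()
    for i, line in enumerate(lines):
        if line.strip().startswith("with lock"):
            indent = line[: len(line) - len(line.lstrip())]
            mutated = lines[:i] + [indent + "if True:  # lock removed"] + lines[i + 1:]
            yield "\n".join(mutated)

for mutant in remove_lock_mutants(SOURCE):
    print(mutant)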

8.
This paper discusses the necessity of a good methodology for the development of reliable software, especially with respect to the final software validation and testing activities. A formal specification development and validation methodology is proposed. This methodology has been applied to the development and validation of a pilot software system incorporating typical features of critical software for nuclear power plant safety protection. The main features of the approach include the use of a formal specification language and the independent development of two sets of specifications. Analysis of the specifications consists of three parts: validation against the functional requirements, consistency and integrity of the specifications, and dual specification comparison based on a high-level symbolic execution technique. Dual design, implementation, and testing are performed. Automated tools to facilitate the validation and testing activities are developed to support the methodology. These include the symbolic executor and the test data generator/dual program monitor system. The experiences of applying the methodology to the pilot software are discussed, and the impact on the quality of the software is assessed.

9.
Two facts about declarative programming prevent the application of conventional testing methods. First, the classical test coverage measures, such as statement, branch or path coverage, cannot be used, since in declarative programs no notion of control flow exists. Second, there is no widely accepted language available for formal specification, since predicate logic, which is the most common formalism for declarative programming, is already a very high-level abstract language. This paper presents a new approach extending previous work by the authors on test input generation for declarative programs. For this purpose, the existing program instrumentation notion is extended and a new logic coverage measure is introduced. The approach is mathematically formalized, and the goal of achieving 100% program logic coverage controls the automatic test input generation. The method is illustrated by means of logic programming; the results are, however, generally applicable. Finally, the concepts introduced have been used practically within a test environment. © 1998 John Wiley & Sons, Ltd.

10.
An approach to the automatic generation of test data having a complex structure (such as XML documents, programs in various programming languages, and the like) is presented. It is based on abstract models that represent various views of the structure of the desired data. The approach enables one to generate small sets of test data for testing the functionality of the target system. The use of abstract models makes configuration of the generation procedure easy and clear; it also facilitates the maintenance and reuse of existing configurations of the test data generator. The approach was implemented in the test data generator called Pinery, which was successfully used in a number of projects including testing commercial C/C++ compilers.

11.
Systematic testing and formal verification to validate reactive programs (Cited 2 times: 0 self-citations, 2 by others)
The use of systematic testing and formal verification in the validation of reactive systems implemented in synchronous languages is illustrated. Systematic testing and formal verification are two techniques for checking the consistency between a program and its specification. The approach to validation is through specification: two system views are developed in addition to the program, a behavioural specification for systematic testing and a logical specification for formal verification. Pursuing both activities, reactive programs can be validated both more efficiently (in terms of costs) and more effectively (in terms of confidence in correctness). This principle is demonstrated here using the well-known lift example.

12.
Automatic Test Case Generation from Statechart Specifications Based on Conformance Testing Theory (Cited 1 time: 0 self-citations, 1 by others)
This paper studies the testing semantics of Statechart specifications and the automatic generation of test cases from them. Building on Tretmans's method for automatically generating test cases from labelled transition system specifications, we investigate how to generate test cases automatically from Statechart specifications. The main contribution of this paper is a formal foundation for conformance testing and test case generation based on Statechart specifications: a formal testing semantics is established for Statechart specifications…
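As a loose illustration only (not Tretmans's ioco algorithm and not the paper's construction; the transition system is a made-up toy model), test sequences can be enumerated from a labelled transition system by bounded traversal of its transitions.

# Toy labelled transition system and bounded enumeration of test sequences.

from collections import deque

# state -> list of (label, next_state); a hypothetical coffee-machine model
LTS = {
    "idle": [("coin", "paid")],
    "paid": [("coffee", "idle"), ("tea", "idle")],
}

def test_sequences(lts, start, max_len):
    """Enumerate all label sequences of length <= max_len reachable from start."""
    queue = deque([(start, [])])
    while queue:
        state, trace = queue.popleft()
        if trace:
            yield trace
        if len(trace) < max_len:
            for label, nxt in lts.get(state, []):
                queue.append((nxt, trace + [label]))

for seq in test_sequences(LTS, "idle", 3):
    print(seq)
# ['coin'], ['coin', 'coffee'], ['coin', 'tea'], ['coin', 'coffee', 'coin'], ...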

13.
Automating the process of software testing is a very popular research topic and of real interest to industry. Test automation can take place at different levels, e.g., test execution, test case generation, and test data generation. This survey gives an overview of state-of-the-art test data generation tools, either academic or commercial. The survey focuses on white- and gray-box techniques. The list of existing tools was filtered with respect to their public availability, their maturity, and their activity. The remaining seven tools, i.e., AgitarOne, CodePro AnalytiX, AutoTest, C++test, Jtest, RANDOOP, and PEX, are briefly introduced and their evaluation results are summarized. For the evaluation we defined 31 benchmark tests, which check the tools' capability to generate test data that satisfies a given specification: 24 benchmarks over primitive types and 7 benchmarks over non-primitive types with more complex specifications. Most of the commercial tools implement a test data strategy that uses constant values found in the method under test, or values slightly modified by means of mathematical operations. This strategy turns out to be very effective. In general, all tools that combine multiple techniques perform very well. For example, PEX uses constraint solving techniques, but in cases where the constraint solver reaches its limitations it falls back on random-based techniques to overcome them. In particular, the two commercial tools AgitarOne and PEX, which combine multiple approaches to test data generation, are able to pass all 31 tests. This survey reflects the status in 2011.

14.
In this paper we discuss the advantages and limitations of a specification-based software testing technique we call CEG-BOR. There are two phases in this approach. First, informal software specifications are converted into cause-effect graphs (CEG). Then, the Boolean OperatoR (BOR) strategy is applied to design and select test cases. The conversion of an informal specification into a CEG helps detect ambiguities and inconsistencies in the specification and sets the stage for design of test cases. The number of test cases needed to satisfy the BOR strategy grows linearly with the number of Boolean operators in the CEG, and BOR testing guarantees detection of certain classes of Boolean operator faults. But, what makes the approach especially attractive is that the BOR based test suites appear to be very effective in detecting other fault types. We have empirically evaluated this broader aspect of the CEG-BOR strategy on a simplified safety-related real-time control system, a set of N-version programs, and on elements of a commercial database system. In all cases, CEG-BOR testing required fewer test cases than those generated for the applications without the use of CEG-BOR. Furthermore, in all cases CEG-BOR testing detected all faults that the original, and independently generated, application test suites did. In two instances CEG-BOR testing uncovered additional faults. Our results indicate that the CEG-BOR strategy is practical, scalable, and effective across diverse applications. We believe that it is a cost-effective methodology for the development of systematic specification-based software test suites.
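A brute-force toy check (not the BOR construction itself; the cause-effect expression, its mutants, and the chosen assignments are hypothetical) illustrating the key property claimed above: a handful of truth assignments can distinguish a Boolean expression from its single-operator mutants, far fewer than the exhaustive truth table.

# Toy illustration of Boolean-operator fault detection.

from itertools import product

def original(a, b, c):
    return (a and b) or c          # hypothetical cause-effect expression

MUTANTS = [
    lambda a, b, c: (a or b) or c,    # 'and' -> 'or'
    lambda a, b, c: (a and b) and c,  # 'or'  -> 'and'
]

def kills_all(tests):
    """True if every mutant disagrees with the original on some test."""
    return all(any(original(*t) != m(*t) for t in tests) for m in MUTANTS)

suite = [(True, True, False), (True, False, False), (False, True, True)]
print(kills_all(suite))                               # True, with 3 tests
print(len(list(product([False, True], repeat=3))))    # 8 exhaustive assignments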

15.
Stratego is a domain-specific language for the specification of program transformation systems. The design of Stratego is based on the paradigm of rewriting strategies: user-definable programs in a little language of strategy operators determine where and in what order transformation rules are (automatically) applied to a program. The separation of rules and strategies supports modularity of specifications. Stratego also provides generic features for the specification of program traversals. In this paper we present a case study of Stratego as applied to a non-trivial problem in program transformation. We demonstrate the use of Stratego in eliminating intermediate data structures from functional programs (also known as deforestation) via the warm fusion algorithm of Launchbury and Sheard. This algorithm has been specified in Stratego and embedded in a fully automatic transformation system for kernel Haskell. The entire system consists of about 2600 lines of specification code, which breaks down into 1850 lines for a general framework for Haskell transformation and 750 lines devoted to a highly modular, easily extensible specification of the warm fusion transformer itself. Its successful design and construction provides further evidence that programs generated from Stratego specifications are suitable for integration into real systems, and that rewriting strategies are a good paradigm for the implementation of such systems.

16.
A compiler-based specification and testing system for defining data types has been developed. The system, DAISTS (data abstraction implementation, specification, and testing system), includes formal algebraic specifications and statement and expression test coverage monitors. This paper describes our initial attempt to evaluate the effectiveness of the system in helping users produce software. In an exploratory study, subjects without prior experience with DAISTS were encouraged by the system to develop effective sets of test cases for their implementations. Furthermore, an analysis of the errors remaining in the implementations provided valuable hints about additional useful testing metrics.
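In the spirit of compiling algebraic axioms into test drivers (a hypothetical Stack example, not DAISTS itself), each equational axiom can be run as an executable check over a few concrete test points.

# Each equational axiom is checked by executing both sides on test points.

class Stack:
    def __init__(self, items=()):
        self._items = list(items)
    def push(self, x):
        return Stack(self._items + [x])
    def pop(self):
        return Stack(self._items[:-1])
    def top(self):
        return self._items[-1]
    def __eq__(self, other):
        return self._items == other._items

# Axioms of the Stack ADT, written as executable predicates.
AXIOMS = [
    ("pop(push(s, x)) == s", lambda s, x: s.push(x).pop() == s),
    ("top(push(s, x)) == x", lambda s, x: s.push(x).top() == x),
]

TEST_POINTS = [(Stack(), 1), (Stack([7, 8]), 9)]

for name, axiom in AXIOMS:
    ok = all(axiom(s, x) for s, x in TEST_POINTS)
    print(f"{name}: {'passed' if ok else 'FAILED'}")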

17.
Data-flow coverage can effectively detect defects and errors in software. To address the large instrumentation and monitoring overhead and the low efficiency of test data generation under this coverage criterion, this paper proposes a new evolutionary test data generation method based on definition-use pair coverage. The method has two parts. First, instrumentation overhead is reduced by reducing the set of test objectives: the proposed subsumption algorithm finds a subset of definition-use pairs such that covering this subset guarantees that all test objectives are covered. Then, a genetic algorithm generates test data for all test objectives; the designed fitness function considers how closely the path actually executed by an individual matches the definition-clear path of each test objective. The method was applied to test data generation for eight benchmark programs and compared with other methods; the results show that it effectively improves program coverage and the efficiency of test data generation.
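A much-simplified sketch of a fitness function for one definition-use pair (hypothetical program and distance formula, not the paper's exact fitness): it returns 0 when the pair is covered and otherwise a branch-distance-style penalty that a genetic algorithm could minimise.

# Fitness of a candidate input for one def-use pair target.

def fitness_for_du_pair(x):
    """Target: the definition of 'y' and its use in the condition 'y > 10'.
    Lower fitness is better; 0 means the def-use pair was covered."""
    trace = []
    y = x * 2            # definition of y (the 'def' of the target pair)
    trace.append("def_y")
    if y > 10:           # the use of y guards this branch
        trace.append("use_y")

    if "use_y" in trace:
        return 0.0                 # target covered
    return (10 - y) + 1            # branch distance for 'y > 10' plus constant K=1

# A genetic algorithm would minimise this; here we just rank a few candidates:
for candidate in [0, 3, 5, 6]:
    print(candidate, fitness_for_du_pair(candidate))
# candidate 6 gives y = 12 > 10, fitness 0 -> the def-use pair is exercised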

18.
A Test Case Generation Tool Based on UML Specifications (Cited 1 time: 1 self-citation, 0 by others)
Test cases are generated from UML statecharts using state-based test data generation criteria. The UML statechart is the key part of test case generation; in a sense, UML statecharts make it relatively easy to generate test cases.

19.
王晓宇, 徐拾义. Computer Engineering (计算机工程), 2004, 30(19): 68-69, 167
This paper analyzes the characteristics of testing and measuring scientific computing software and, drawing on the properties of such programs, proposes the Module Relation Diagram (MRD) model for describing the relationships between modules; it then studies the concrete application of this model to the automatic generation of test programs. On this basis, the selection of modules to retest in regression testing is explored and a corresponding algorithm is proposed. These ideas were applied in the National Natural Science Foundation of China project "A New Concept of Design for Software Testability: Software Built-In Self-Test"; practice shows that the model supports further research on automated software testing.

20.
Constraint-based testing (CBT) is the process of generating test cases against a testing objective by using constraint solving techniques. When programs contain dynamic memory allocation and loops, constraint reasoning becomes challenging, as new variables and new constraints must be created during the test data generation process. In this paper, we address this problem by proposing a new constraint model of C programs based on operators that model dynamic memory management. These operators apply powerful deduction rules on abstract states of the memory, enhancing the constraint reasoning process. This makes it possible to automatically generate test data respecting complex coverage objectives. We illustrate our approach on a well-known difficult example program that contains dynamic memory allocation/deallocation, structures, and loops. We describe our implementation and provide preliminary experimental results on this example that show the highly deductive potential of the approach.
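As a minimal constraint-based test data generation sketch (a toy path constraint without dynamic memory, so it does not exercise the paper's memory-model operators), a path condition can be handed to an off-the-shelf solver such as Z3 (Python bindings: pip install z3-solver) to obtain covering inputs.

# Solve a hypothetical path constraint to produce test data for a branch.

from z3 import Int, Solver, sat

# Program under test (for reference):
#   def f(x, y):
#       if x > 5 and x + y == 12:
#           ...  <- target branch to cover

x, y = Int("x"), Int("y")
path_constraint = [x > 5, x + y == 12]

solver = Solver()
solver.add(*path_constraint)
if solver.check() == sat:
    model = solver.model()
    print("test data covering the target branch:",
          {"x": model[x].as_long(), "y": model[y].as_long()})
else:
    print("path is infeasible")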
