Similar Documents
20 similar documents found.
1.
Design and Implementation of a Tool for Automatic Path-Oriented Test Data Generation   (cited: 1; self-citations: 0; other: 1)
Path-oriented test data generation is a fundamental problem in software testing. Gupta et al. proposed an iterative relaxation method that linearizes predicate functions to solve it. Reference [2] improved that method and proved that the improved method generates the same constraint system as the original. Taking the improved method as its core algorithm, this work follows software engineering principles, adopts an object-oriented approach, and uses UML for design; a prototype tool that automatically generates test data for program paths was implemented in C++ under Red Hat Linux 7.0 and then ported to Windows.
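The iterative relaxation idea can be sketched compactly. The following is a hedged illustration, not the tool or Gupta et al.'s exact algorithm: each branch predicate along the target path is assumed to be a linear constraint g(x) > 0 over the inputs, and a violated predicate is repaired by moving the input along the predicate's coefficient vector, a step that is exact for linear predicates; the outer loop re-checks all predicates, since repairing one may disturb another.

```python
# Hypothetical sketch of iterative relaxation for path-oriented test
# data generation. Each predicate is (coeffs, const), meaning
# sum(c * x) + const > 0 must hold along the target path.

def generate_for_path(predicates, x0, eps=1.0, max_iter=50):
    x = list(x0)
    for _ in range(max_iter):
        all_satisfied = True
        for coeffs, const in predicates:
            g = sum(c * xi for c, xi in zip(coeffs, x)) + const
            if g > 0:
                continue  # this branch predicate already holds
            all_satisfied = False
            # step along the gradient (the coefficient vector) just far
            # enough to push g past zero; exact for linear predicates
            norm2 = sum(c * c for c in coeffs)
            step = (-g + eps) / norm2
            x = [xi + step * c for c, xi in zip(coeffs, x)]
        if all_satisfied:
            return x
    return None  # path constraints not satisfied within the budget
```

For the path constraints x > 5 and y > 3, starting from (0, 0), the sketch returns an input satisfying both after one sweep.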

2.
Test data generation is the core of software testing. This paper introduces the iterative relaxation method and an improvement to it; the improved method generates test data more effectively than the original and can be used not only for automatic white-box test data generation but also for black-box test data generation. On this basis, a path-oriented test data generation framework is proposed, and its application to unit testing and integration testing is discussed.

3.
Constructing fault test constraints is complex and computationally expensive, and the resulting test cases are often of low quality, which hinders wide adoption. To address this, a method is proposed for constructing fault test constraints from an optimal fusion set of multiple slices. Different program slices under the same slicing criterion are used to build a fusion-degree matrix that measures the slices' mutual consistency; the slice fusion degree, path conditions, and the mechanism by which software faults are triggered and propagated are then combined to construct the test constraint for a given fault, which controls the number of test cases and improves their quality. Experimental results show that, compared with traditional predicate-constraint and necessity-constraint approaches, this method generates fewer test cases, rarely generates invalid ones, and finds bugs more efficiently.

4.
Automatic Generation of String Test Data Based on Predicate Slicing   (cited: 3; self-citations: 0; other: 3)
String predicates are very common, and automatically generating string test data for them remains an open problem. For string predicates, this paper discusses a dynamic algorithm for generating the predicate slice of a given predicate on a path, and a method for automatically generating string test data based on predicate slices; a definition of the distance between strings is also given. Using the program's DUC (Definition-Use-Control) expressions, the predicate slice is constructed; for an arbitrary input, executing the predicate slice yields the current values of the variables in the predicate, and branch-function minimization is then applied to each character of those variables, dynamically generating ON-OFF test points on the boundary of the given string predicate. Experiments show the method is effective.
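A minimal sketch of per-character branch-function minimization, for the special case of the predicate s == target only; the paper's DUC-based slice construction and boundary placement are more general, and the distance function here is just one plausible choice. The branch function is a character-wise distance that is zero exactly when the predicate holds; driving it to zero one character at a time yields an ON boundary point, and perturbing one character by a single code point gives a companion OFF point.

```python
# Hedged sketch: branch-function minimization for a string equality
# predicate. char_distance is zero iff the predicate s == target holds
# (assuming equal lengths, for simplicity).

def char_distance(s, target):
    return sum(abs(ord(a) - ord(b)) for a, b in zip(s, target))

def generate_on_off(s, target):
    chars = list(s)
    for i in range(len(chars)):
        # minimize the branch function in this character position,
        # moving one code point at a time toward the target character
        while ord(chars[i]) != ord(target[i]):
            step = 1 if ord(chars[i]) < ord(target[i]) else -1
            chars[i] = chr(ord(chars[i]) + step)
    on = "".join(chars)                 # distance 0: predicate satisfied
    off = chr(ord(on[0]) + 1) + on[1:]  # minimal one-code-point violation
    return on, off
```

Starting from "abc" with target "cat", the sketch produces the ON point "cat" (distance 0) and an OFF point at distance 1.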

5.
A Dynamic Program Slicing Algorithm for Test Data Generation   (cited: 3; self-citations: 0; other: 3)
王雪莲  赵瑞莲  李立健 《计算机应用》2005,25(6):1445-1447,1450
This paper introduces the basic concepts of program slicing, proposes a dynamic program slicing algorithm based on forward analysis, and discusses the application of program slicing to test data generation. The results show that it can effectively improve the efficiency of path-oriented test data generation.
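The forward flavor of dynamic slicing can be illustrated in miniature. This is a hypothetical sketch, not the paper's algorithm: the execution trace is assumed to be a list of (statement id, defined variable, used variables) records, and the set of statements influencing each variable is propagated forward as the trace is scanned once.

```python
# Hedged sketch of forward dynamic slicing over a hypothetical trace
# format: each record is (stmt_id, defined_var, used_vars).

def forward_dynamic_slice(trace, criterion_var):
    slices = {}  # var -> set of statement ids currently influencing it
    for stmt_id, defined, used in trace:
        s = {stmt_id}
        for v in used:
            # the defining statement inherits everything that
            # influenced the variables it reads
            s |= slices.get(v, set())
        slices[defined] = s
    return slices.get(criterion_var, set())
```

For the trace a=…; b=f(a); c=…; d=g(b,c), the slice on d is all four statements, while the slice on b contains only the first two.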

6.
A Path-Based Algorithm for Automatic Test Data Generation   (cited: 3; self-citations: 0; other: 3)
陈继锋  朱利  沈钧毅  陈玲 《控制与决策》2005,20(9):1065-1068
A new algorithm for path-based automatic test data generation is proposed. The algorithm takes a linear predicate function on the path directly as its linear arithmetic representation to construct linear constraints on the input variables, and computes a linear arithmetic representation only when a predicate function is nonlinear in the inputs. It therefore does not need to compute linear arithmetic representations for all predicate functions, compute predicate slices, determine input dependency sets, or construct linear constraints on the increments of the input variables. Theoretical analysis and examples show that the algorithm is simple, effective, and computationally cheap.

7.
Statistical Debugging: A Hypothesis Testing-Based Approach   总被引:1,自引:0,他引:1  
Manual debugging is tedious, as well as costly. The high cost has motivated the development of fault localization techniques, which help developers search for fault locations. In this paper, we propose a new statistical method, called SOBER, which automatically localizes software faults without any prior knowledge of the program semantics. Unlike existing statistical approaches that select predicates correlated with program failures, SOBER models the predicate evaluation in both correct and incorrect executions and regards a predicate as fault-relevant if its evaluation pattern in incorrect executions significantly diverges from that in correct ones. Featuring a rationale similar to that of hypothesis testing, SOBER quantifies the fault relevance of each predicate in a principled way. We systematically evaluate SOBER under the same setting as previous studies. The result clearly demonstrates the effectiveness: SOBER could help developers locate 68 out of the 130 faults in the Siemens suite by examining no more than 10 percent of the code, whereas the cause transition approach proposed by Holger et al. [2005] and the statistical approach by Liblit et al. [2005] locate 34 and 52 faults, respectively. Moreover, the effectiveness of SOBER is also evaluated in an "imperfect world", where the test suite is either inadequate or only partially labeled. The experiments indicate that SOBER could achieve competitive quality under these harsh circumstances. Two case studies with grep 2.2 and bc 1.06 are reported, which shed light on the applicability of SOBER on reasonably large programs.
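The core idea of comparing predicate evaluation patterns can be sketched with a deliberately simplified statistic (not SOBER's exact likelihood-based one): for each predicate, collect its per-run evaluation bias, the fraction of its evaluations in a run that were true, and score how far the failing-run mean lies from the passing-run distribution.

```python
# Simplified, hypothetical stand-in for SOBER's fault-relevance
# score: a z-like statistic of the failing-run mean bias under the
# passing-run distribution. Higher means more fault-relevant.

from statistics import mean, pstdev

def fault_relevance(passing_bias, failing_bias):
    """Each argument: list of per-run true-evaluation fractions."""
    mu, sigma = mean(passing_bias), pstdev(passing_bias)
    sigma = max(sigma, 1e-9)  # guard against zero variance
    n = len(failing_bias)
    return abs(mean(failing_bias) - mu) * (n ** 0.5) / sigma
```

A predicate whose bias jumps from roughly 0.1 in passing runs to roughly 0.9 in failing runs scores far higher than one whose bias barely moves, which is the ranking behaviour the abstract describes.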

8.
Order-sorted logic programming with predicate hierarchy   总被引:1,自引:0,他引:1  
Order-sorted logic has been formalized as first-order logic with sorted terms where sorts are ordered to build a hierarchy (called a sort-hierarchy). These sorted logics lead to useful expressions and inference methods for structural knowledge that ordinary first-order logic lacks. Nitta et al. pointed out that for legal reasoning a sort-hierarchy (or a sorted term) is not sufficient to describe structural knowledge for event assertions, which express facts caused at some particular time and place. The event assertions are represented by predicates with n arguments (i.e., n-ary predicates), and then a particular kind of hierarchy (called a predicate hierarchy) is built by a relationship among the predicates. To deal with such a predicate hierarchy, which is more intricate than a sort-hierarchy, Nitta et al. implemented a typed (sorted) logic programming language extended to include a hierarchy of verbal concepts (corresponding to predicates). However, the inference system lacks a theoretical foundation because its hierarchical expressions exceed the formalization of order-sorted logic. In this paper, we formalize a logic programming language with not only a sort-hierarchy but also a predicate hierarchy. This language can derive general and concrete expressions in the two kinds of hierarchies. For the hierarchical reasoning of predicates, we propose a manipulation of arguments in which surplus and missing arguments in derived predicates are eliminated and supplemented. As discussed by Allen, McDermott and Shoham in research on temporal logic and as applied by Nitta et al. to legal reasoning, if each predicate is interpreted as an event or action (not as a static property), then missing arguments should be supplemented by existential terms in the argument manipulation. Based on this, we develop a Horn clause resolution system extended to add inference rules of predicate hierarchies. 
With a semantic model restricted by the interpretation of a predicate hierarchy, the soundness and completeness of the Horn-clause resolution are proven.

9.
Software testing comprises four phases: static analysis, path selection, test data generation, and dynamic analysis; automatic test data generation is one of its key techniques. Based on an analysis of the program under test, this paper proposes a balancing method for generating test data: for an arbitrary input variable, the variable's feasible range is determined and the function of the variables in the predicate is minimized to obtain test data. A concrete implementation is also given.

10.
Building on research into program slicing techniques, a new slice-based mutation testing method is proposed. Examples show that the method can effectively improve the accuracy and efficiency of mutation testing.

11.
Predicates appear in both the specification and implementation of a program. One approach to software testing, referred to as predicate testing, is to require certain types of tests for a predicate. In this paper, three fault-based testing criteria are defined for compound predicates, which are predicates with one or more AND/OR operators. BOR (boolean operator) testing requires a set of tests to guarantee the detection of (single or multiple) boolean operator faults, including incorrect AND/OR operators and missing/extra NOT operators. BRO (boolean and relational operator) testing requires a set of tests to guarantee the detection of boolean operator faults and relational operator faults (i.e., incorrect relational operators). BRE (boolean and relational expression) testing requires a set of tests to guarantee the detection of boolean operator faults, relational operator faults, and a type of fault involving arithmetical expressions. It is shown that for a compound predicate with n (n > 0) AND/OR operators, at most n+2 constraints are needed for BOR testing and at most 2n+3 constraints for BRO or BRE testing, where each constraint specifies a restriction on the value of each boolean variable or relational expression in the predicate. Algorithms for generating a minimum set of constraints for BOR, BRO, and BRE testing of a compound predicate are given, and the feasibility problem for the generated constraints is discussed. For boolean expressions that contain multiple occurrences of some boolean variables, how to combine BOR testing with the meaningful impact strategy (Weyuker et al., 1994) is described.
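The n+2 bound is easy to see in the special case of a pure conjunction b1 AND b2 AND … AND b_{n+1} (n AND operators): the unique all-true point plus one single-false point per operand suffice to expose operator faults, giving n+2 constraints. The sketch below illustrates that case only; general mixed AND/OR predicates need the paper's recursive construction.

```python
# Hedged illustration of the BOR constraint count for a pure
# conjunction of num_vars boolean operands (num_vars - 1 AND
# operators). Each constraint is a tuple of 't'/'f' truth values.

def bor_constraints_conjunction(num_vars):
    constraints = [("t",) * num_vars]      # the unique true point
    for i in range(num_vars):              # one false point per operand
        point = ["t"] * num_vars
        point[i] = "f"
        constraints.append(tuple(point))
    return constraints
```

For three operands (n = 2 AND operators) this yields 4 = n+2 constraints, matching the stated bound.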

12.
《Real》2005,11(4):270-281
Recently, Shen et al. [IEEE Transactions on Image Processing 2003;12:283–95] presented an efficient adaptive vector quantization (AVQ) algorithm, and their AVQ algorithm has a better peak signal-to-noise ratio (PSNR) than the previous benchmark AVQ algorithm. This paper presents an improved AVQ algorithm based on a hybrid codebook data structure consisting of three codebooks: the locality codebook, the static codebook, and the history codebook. Owing to its easy maintenance, the proposed AVQ algorithm achieves a considerable computation saving while preserving PSNR performance similar to the previous AVQ algorithm by Shen et al. Experimental results show that the proposed AVQ algorithm has about a 75% encoding-time improvement over the previous AVQ algorithm while both deliver similar PSNR performance.

13.
Slicing is a program analysis technique originally developed for imperative languages; it facilitates understanding of data flow and debugging. This paper discusses slicing of Constraint Logic Programs. Constraint Logic Programming (CLP) is an emerging software technology with a growing number of applications. Data flow in constraint programs is not explicit, and for this reason the concepts of slice and the slicing techniques of imperative languages are not directly applicable. This paper formulates declarative notions of slice suitable for CLP. They provide a basis for defining slicing techniques (both dynamic and static) based on variable sharing. The techniques are further extended by using groundness information. A prototype dynamic slicer of CLP programs implementing the presented ideas is briefly described, together with the results of some slicing experiments.

14.
This paper describes a novel structural approach to recognizing human facial features for emotion recognition. Conventionally, features extracted from facial images are represented by relatively poor representations, such as arrays or sequences, with a static data structure. In this study, we propose to extract facial expression feature vectors as Localized Gabor Features (LGF) and then transform these feature vectors into a FacE Emotion Tree Structures (FEETS) representation, an extension of the Human Face Tree Structures (HFTS) representation presented in (Cho and Wong in Lecture notes in computer science, pp 1245–1254, 2005). This representation models how humans perceive a real face, with both the entities and their relationships contributing to the facial expression features. Moreover, a new structural connectionist architecture based on a probabilistic approach to adaptive processing of data structures is presented. The probabilistic-based recursive neural network (PRNN) model, extended from Frasconi et al. (IEEE Trans Neural Netw 9:768–785, 1998), is developed to train and recognize human emotions by generalizing the FEETS representation. For empirical studies, we benchmarked our emotion recognition approach against other well-known classifiers. Using public-domain databases such as the Japanese Female Facial Expression (JAFFE) database (Lyons et al. in IEEE Trans Pattern Anal Mach Intell 21(12):1357–1362, 1999; Lyons et al. in third IEEE international conference on automatic face and gesture recognition, 1998) and the Cohn–Kanade AU-Coded Facial Expression (CMU) database (Cohn et al. in 7th European conference on facial expression measurement and meaning, 1997), the proposed system obtains an accuracy of about 85–95% under subject-dependent and subject-independent conditions. Moreover, in tests on images containing artifacts, the proposed model remains robust at facial emotion recognition.

15.
Writing correct distributed programs is hard. In spite of extensive testing and debugging, software faults persist even in commercial-grade software. Many distributed systems should be able to operate properly even in the presence of software faults. Monitoring the execution of a distributed system and, on detecting a fault, initiating the appropriate corrective action is an important way to tolerate such faults. This gives rise to the predicate detection problem, which requires finding whether there exists a consistent cut of a given computation that satisfies a given global predicate. Detecting a predicate in a computation is, however, an NP-complete problem in general. In order to ameliorate the associated combinatorial explosion, we introduce the notion of a computation slice. Formally, the slice of a computation with respect to a predicate is a (sub)computation with the least number of consistent cuts that contains all consistent cuts of the computation satisfying the predicate. Intuitively, a slice is a concise representation of those consistent cuts of a computation that satisfy a certain condition. To detect a predicate, rather than searching the state space of the computation, it is much more efficient to search the state space of the slice. We prove that the slice of a computation is uniquely defined for all predicates. We also present efficient algorithms for computing the slice for several useful classes of predicates. For an arbitrary predicate, we establish that the problem of computing the slice is NP-complete in general. Nonetheless, for such a predicate, we develop an efficient heuristic algorithm for computing an approximate slice. Our experimental results demonstrate that slicing can lead to an exponential improvement over existing techniques for predicate detection in terms of time and space. Received: 19 November 2003, Revised: 29 July 2004, Published online: 7 February 2005. Vijay K. Garg: supported in part by NSF Grants ECS-9907213, CCR-9988225, Texas Education Board Grant ARP-320, an Engineering Foundation Fellowship, and an IBM grant. Parts of this paper appeared earlier in conference proceedings [GM01, MG01a, MG03a].
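The state-space explosion that slicing avoids is visible even in a toy setting. This is a hypothetical two-process encoding, not the paper's algorithm: a cut (k1, k2) takes the first k1 events of process P1 and the first k2 of P2, and a message sent at P1 event s and received at P2 event r forces k1 >= s whenever k2 >= r; predicate detection then brute-forces the cut lattice, which is exactly the exponential search slicing is designed to replace.

```python
# Hedged sketch: consistent-cut enumeration and predicate detection
# for two processes. messages is a list of (send_index_on_p1,
# receive_index_on_p2) pairs.

from itertools import product

def consistent_cuts(n1, n2, messages):
    cuts = []
    for k1, k2 in product(range(n1 + 1), range(n2 + 1)):
        # a cut containing a receive must contain the matching send
        if all(k1 >= s for s, r in messages if k2 >= r):
            cuts.append((k1, k2))
    return cuts

def detect(n1, n2, messages, predicate):
    # brute force over the cut lattice; computation slicing exists
    # precisely to avoid this search in the general n-process case
    return any(predicate(c) for c in consistent_cuts(n1, n2, messages))
```

With two events per process and one message from P1's first event to P2's first event, 7 of the 9 candidate cuts are consistent, and (0, 1), which would contain a receive without its send, is correctly excluded.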

16.
Path-oriented test data generation is a fundamental problem in software testing. This paper introduces a self-developed tool for automatic path-oriented test data generation. Tcl/Tk is a powerful graphical interface design toolkit that runs on Windows, UNIX, and other operating systems and is highly portable. The paper briefly introduces Tcl/Tk and describes how the tool's graphical interface was designed with it.

17.
Research on Path Coverage Based on DDGRAPH   (cited: 3; self-citations: 0; other: 3)
Software testing comprises four phases: static analysis, path selection, test data generation, and dynamic analysis; automatic path selection is one of its key techniques. Path coverage is an important testing method that requires every branch of the program to execute at least once. By analyzing the DDGRAPH, this paper proposes representations of the dominator tree and the implied tree of the arcs in a DDGRAPH, gives a method for determining the unconstrained arcs from these two trees, and uses an approximately minimal predicate coverage strategy to determine a subset of test paths covering all unconstrained arcs.
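The dominance relation underlying such dominator trees can be computed with the standard iterative dataflow algorithm. This is a generic sketch on an ordinary control-flow graph, not the paper's DDGRAPH-specific arc construction: a node d dominates n if every path from the entry to n passes through d.

```python
# Hedged sketch: iterative computation of the dominator sets of a
# CFG given as succ: node -> list of successor nodes.

def dominators(succ, entry):
    nodes = set(succ)
    preds = {n: set() for n in nodes}
    for n, ss in succ.items():
        for s in ss:
            preds[s].add(n)
    dom = {n: set(nodes) for n in nodes}  # start from "everything"
    dom[entry] = {entry}
    changed = True
    while changed:
        changed = False
        for n in nodes - {entry}:
            if not preds[n]:
                continue  # unreachable node: leave as-is in this sketch
            # n is dominated by itself plus whatever dominates
            # every one of its predecessors
            new = {n} | set.intersection(*(dom[p] for p in preds[n]))
            if new != dom[n]:
                dom[n] = new
                changed = True
    return dom
```

On a diamond (entry branching to a and b, both merging at x), only the entry dominates the merge node, as expected.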

18.
The product high-order ambiguity function (PHAF) was proposed for analyzing multicomponent polynomial-phase signals (mc-PPS), but in practice its ability to suppress cross terms is limited, so mc-PPS estimation remains difficult. Successive filtering is a powerful tool for cross-term suppression but suffers from error propagation between components; the relaxation method (RELAX) uses cyclic iteration and strongly suppresses such error propagation in serial estimation. Combining the two yields an iterative-relaxation PHAF method. Analysis of its performance as the parameters of the estimated signal vary shows that the improved PHAF is robust: it reduces the estimation blind zone, achieves better estimation accuracy, and has a lower signal-to-noise ratio (SNR) threshold. These properties are verified by mc-PPS simulation examples.

19.
Two robust remote user authentication protocols using smart cards   总被引:2,自引:0,他引:2  
With the rapid growth of electronic commerce and the enormous demand from a variety of Internet-based applications, strong privacy protection and robust system security have become essential requirements for an authentication scheme or universal access control mechanism. In order to reduce implementation complexity and achieve computational efficiency, design issues for efficient and secure password-based remote user authentication schemes have been extensively investigated by the research community over the past two decades. Recently, two well-designed password-based authentication schemes using smart cards were introduced by Hsiang and Shih (2009) and Wang et al. (2009), respectively: Hsiang and Shih proposed a static-ID-based authentication protocol, and Wang et al. presented a dynamic-ID-based authentication scheme. The authors of both schemes claimed that their protocols deliver important security features and system functionalities, such as mutual authentication, data security, no verification table, freedom of password selection, resistance against ID-theft, replay, and insider attacks, as well as computational efficiency. However, these two schemes still leave much room for security enhancement. In this paper, we first demonstrate a series of vulnerabilities in these two schemes. Then, two enhanced protocols with corresponding remedies are proposed to eliminate all identified security flaws in both schemes.

20.
A new method for the testing of combinational digital circuits is presented. The method is based on the concept of the 'index vector' of a switching function (Gupta 1987), and represents an extension of syndrome testing. A large percentage of syndrome-untestable faults are found to be index vector testable. An approach to testing index-vector-untestable circuits that relies only on the function realized by the circuit and is independent of the circuit topology is presented. The method can be used for the detection of both single and multiple stuck-at faults in a combinational circuit.

