Similar Articles
 A total of 20 similar articles were found.
1.
Fault localization is an important and challenging task during software testing. Among the techniques studied in this field, program spectrum based fault localization is a promising approach. To perform spectrum based fault localization, a set of test oracles should be provided, and the effectiveness of fault localization depends highly on the quality of those test oracles. Moreover, effectiveness usually degrades when multiple simultaneous faults are present, and faced with multiple faults it is difficult for developers to determine when to stop the fault localization process. To address these issues, we propose an iterative fault localization process, i.e., an iterative process of selecting test cases for effective fault localization (IPSETFUL), to identify as many faults as possible in the program until a stopping criterion is satisfied. It is performed based on a concept lattice of program spectrum (CLPS) proposed in our previous work. Based on the labeling approach of the CLPS, program statements are categorized as dangerous statements, safe statements, and sensitive statements. To identify the faults, developers need to check the dangerous statements. Meanwhile, developers select from the original test suite a set of test cases covering the dangerous or sensitive statements, and a new CLPS is generated for the next iteration. The iterative process ends when there are no failing tests in the test suite and all statements on the CLPS have become safe statements. We conduct an empirical study on several subject programs, and the results show that IPSETFUL can help identify most of the faults in the program with the given test suite. Moreover, it can save much effort in inspecting non-faulty program statements compared with existing spectrum based fault localization techniques and the relevant state-of-the-art technique.
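For context, the following minimal Python sketch illustrates the program-spectrum idea that IPSETFUL builds on, not the CLPS-based algorithm itself: statement coverage from passing and failing tests is reduced to per-statement counts, from which a suspiciousness score (here the well-known Ochiai coefficient) is computed and used to rank statements for inspection. The coverage matrix and test outcomes are hypothetical.

```python
import math

# Hypothetical coverage data: stmt_id -> set of tests covering it,
# plus the set of failing tests. Real data would come from an
# instrumented run of the test suite.
coverage = {
    "s1": {"t1", "t2", "t3"},
    "s2": {"t2", "t3"},
    "s3": {"t3", "t4"},
}
failing = {"t3"}

def ochiai(stmt):
    """Ochiai suspiciousness: ef / sqrt(total_failed * (ef + ep))."""
    covered = coverage[stmt]
    ef = len(covered & failing)          # failing tests that execute stmt
    ep = len(covered - failing)          # passing tests that execute stmt
    denom = math.sqrt(len(failing) * (ef + ep))
    return ef / denom if denom else 0.0

# Rank statements: the most suspicious ones are inspected first.
for stmt in sorted(coverage, key=ochiai, reverse=True):
    print(stmt, round(ochiai(stmt), 3))
```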

2.
Considering the fact that faults may be revealed as undesired mutual effects of program predicates on each other, a new approach for localizing latent bugs, namely Hierarchy-Debug, is presented in this paper. To analyze the vertical effect of predicates on each other and on program termination status, the predicates are fitted into a logistic lasso model. To support scalability, a hierarchical clustering algorithm is applied to cluster the predicates according to their presence in different executions. Considering each cluster as a pseudo-predicate, a distinct lasso model is built for the intermediate levels of the hierarchy. Then, we apply a majority voting technique to score the predicates according to their lasso coefficients at different levels of the hierarchy. Predicates with relatively higher scores are ranked as fault-relevant predicates. To provide the context of failure, faulty sub-paths are identified as sequences of fault-relevant predicates. The grouping effect of Hierarchy-Debug helps programmers detect multiple bugs. Four case studies have been designed to evaluate the proposed approach on three well-known test suites: Space, Siemens, and Bash. The evaluations show that Hierarchy-Debug produces more precise results compared with prior fault localization techniques on the subject programs.
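As a hedged illustration of the core statistical step only (the hierarchical clustering and majority voting of Hierarchy-Debug are omitted), the sketch below fits an L1-regularized (lasso) logistic model of run outcome on a hypothetical predicate-by-execution matrix and ranks predicates by the magnitude of their coefficients.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical data: rows are executions, columns are predicates
# (1 = predicate observed true in that run); y = 1 marks a failing run.
X = np.array([[1, 0, 1, 0],
              [1, 1, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 0, 1],
              [1, 1, 1, 1]])
y = np.array([1, 1, 0, 0, 1])

# The L1 penalty (lasso) drives coefficients of irrelevant predicates to zero.
model = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
model.fit(X, y)

# Predicates with the largest absolute coefficients are treated as the
# most fault-relevant under this model.
scores = np.abs(model.coef_[0])
ranking = np.argsort(-scores)
print("predicate ranking:", ranking, "scores:", scores.round(3))
```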

3.
Fault localization is the task of finding the locations of faults in a program. Most existing fault-localization methods use the coverage information of test cases to identify a set of suspicious statements that cause program failures, but they ignore the context in which these statements interact to cause the failure. We therefore propose a context-enhanced fault-localization method, Context-FL, which improves localization performance by constructing such contexts. Context-FL uses dynamic slicing to build a fault-propagation context of data and control dependences, revealing the propagation dependences among the failure-inducing statements; it then uses a suspiciousness metric to differentiate the suspiciousness of the statements within a context fragment; finally, Context-FL reports the context annotated with suspiciousness values as the localization result. Experimental results show that Context-FL outperforms eight typical fault-localization methods.

4.
Context: Fault localization is an important and expensive activity in software debugging. Previous studies indicated that statistically-based fault-localization techniques are effective in prioritizing the possible faulty statements with relatively low computational complexity, but prior work on statistical analysis has not fully investigated the behavior state information of each program element. Objective: The objective of this paper is to propose an effective fault-localization approach based on the analysis of state dependence information between program elements. Method: In this paper, state dependency is proposed to describe the control flow dependence between statements with particular states. A state dependency probabilistic model uses path profiles to analyze the state dependency information. Then, a fault-localization approach is proposed to locate faults by differentiating the state dependencies in passed and failed test cases. Results: We evaluated the fault-localization effectiveness of our approach in experiments on the Siemens programs and four UNIX programs. Furthermore, we compared our approach with current state-of-the-art fault-localization methods such as SOBER, Tarantula, and CP. The experimental results show that our approach can locate more faults than the other methods in every range on the Siemens programs, and the overall efficiency of our approach in the range of 10–30% of analyzed source code is higher than the other methods on the UNIX programs. Conclusion: Our studies show that our approach consistently outperforms the other evaluated techniques in terms of effectiveness in fault localization on the Siemens programs. Moreover, our approach is highly effective in fault localization even when very few test cases are available.
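A loose Python illustration of the contrasting step only, assuming hypothetical (predicate state, successor statement) edges rather than the paper's path-profile based probabilistic model: edges are scored by how much more often they appear in failed runs than in passed runs.

```python
from collections import Counter

# Hypothetical state-dependence edges observed per run:
# (branch predicate with its outcome, statement executed under that outcome).
failed_runs = [
    [("p1=True", "s3"), ("p2=False", "s7")],
    [("p1=True", "s3"), ("p2=True", "s5")],
]
passed_runs = [
    [("p1=False", "s4"), ("p2=True", "s5")],
    [("p1=True", "s3"), ("p2=True", "s5")],
]

def edge_counts(runs):
    """Count, per edge, how many runs exhibit it at least once."""
    return Counter(edge for run in runs for edge in set(run))

fail_c, pass_c = edge_counts(failed_runs), edge_counts(passed_runs)

def score(edge):
    """Fraction of failing vs. passing runs exhibiting the edge."""
    f = fail_c[edge] / len(failed_runs)
    p = pass_c[edge] / len(passed_runs)
    return f / (f + p) if f + p else 0.0

for edge in sorted(set(fail_c) | set(pass_c), key=score, reverse=True):
    print(edge, round(score(edge), 2))
```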

5.
Predicate-based statistical fault-localization techniques find fault-relevant predicates in a program by contrasting the statistics of the evaluation results of individual predicates between failed runs and successful runs. While short-circuit evaluations may occur in program executions, treating predicates as atomic units ignores this fact, masking out various types of useful statistics on dynamic program behavior. In this paper, we differentiate the short-circuit evaluations of individual predicates on individual program statements, producing one set of evaluation sequences per predicate. We then investigate experimentally the effectiveness of using these sequences to locate faults by comparing existing predicate-based techniques with and without such differentiation. We use both the Siemens program suite and four real-life UNIX utility programs as our subjects. The experimental results show that the proposed use of short-circuit evaluations can, on average, improve predicate-based statistical fault-localization techniques while incurring relatively small performance overhead.
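The following small Python sketch, using a hypothetical compound predicate, shows the kind of information that differentiating short-circuit evaluations preserves: a run in which the first operand is false and a run in which the first operand is true but the second is false both make the predicate false as an atomic unit, yet they produce different evaluation sequences.

```python
def probe(label, value, trace):
    """Record the evaluation of one sub-predicate and return its value."""
    trace.append((label, value))
    return value

def compound_predicate(p, q, trace):
    # Python's `and` short-circuits: if p is false, q is never evaluated,
    # so the trace distinguishes <p=False> from <p=True, q=...>.
    return probe("p", p, trace) and probe("q", q, trace)

for p, q in [(False, True), (True, False), (True, True)]:
    trace = []
    result = compound_predicate(p, q, trace)
    print(f"p={p}, q={q} -> result={result}, evaluation sequence={trace}")
```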

6.
A methodology for proving the termination of well-moded logic programs is developed by reducing the termination problem of logic programs to that of term rewriting systems. A transformation procedure is presented to derive a term rewriting system from a given well-moded logic program such that the termination of the derived rewrite system implies the termination of the logic program for all well-moded queries under a class of selection rules. This makes the vast body of termination orderings proposed in the literature on term rewriting applicable to proving termination of logic programs. The termination of various benchmark programs has been established with this approach. Unlike other mechanizable approaches, the proposed approach does not require any preprocessing and works well even in the presence of mutual recursion. The transformation has also been implemented as a front end to the Rewrite Rule Laboratory (RRL) and has been used in establishing termination of nontrivial Prolog programs, such as a prototype compiler for the ProCoS PL0 language.

7.
This paper presents the results of a large scale empirical study of coherent dependence clusters. All statements in a coherent dependence cluster depend upon the same set of statements and affect the same set of statements; a coherent cluster's statements have ‘coherent’ shared backward and forward dependence. We introduce an approximation to efficiently locate coherent clusters and show that it has a minimum precision of 97.76%. Our empirical study also finds that, despite their tight coherence constraints, coherent dependence clusters are in abundance: 23 of the 30 programs studied have coherent clusters that contain at least 10% of the whole program. Studying patterns of clustering in these programs reveals that most programs contain multiple substantial coherent clusters. A series of subsequent case studies uncover that all clusters of significant size map to a logical functionality and correspond to a program structure. For example, we show that for the program acct, the top five coherent clusters all map to specific, yet otherwise non-obvious, functionality. Cluster visualization also brings out subtle deficiencies in program structure and identifies potential refactoring candidates. A study of inter-cluster dependence is used to highlight how coherent clusters are connected to each other, revealing higher-level structures, which can be used in reverse engineering. Finally, studies are presented to illustrate how clusters are not correlated with program faults as they remain stable during most system evolution.
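The defining property (all statements in a coherent cluster share one backward-dependence set and one forward-dependence set) can be sketched in a few lines of Python by grouping statements on that pair of sets; the dependence sets below are hypothetical and would in practice come from a static slicer.

```python
from collections import defaultdict

# Hypothetical dependence sets per statement: (backward slice, forward slice).
deps = {
    "s1": (frozenset({"a", "b"}), frozenset({"x"})),
    "s2": (frozenset({"a", "b"}), frozenset({"x"})),
    "s3": (frozenset({"c"}),      frozenset({"y", "z"})),
    "s4": (frozenset({"a", "b"}), frozenset({"x"})),
}

# Statements sharing both sets form one coherent dependence cluster.
clusters = defaultdict(list)
for stmt, signature in deps.items():
    clusters[signature].append(stmt)

for (backward, forward), members in clusters.items():
    print(sorted(members), "depend on", sorted(backward),
          "and affect", sorted(forward))
```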

8.
In practical debugging, programmers often locate software faults by tracing back erroneous variable values and their propagation. A variable holding an erroneous value is called an infected variable, and the program location where an infected variable first holds an erroneous value in a failing run is its infection site. This paper presents a technique that combines dynamic forward program slicing with statement-coverage information to locate the initial infection sites of program variables. By analyzing the origin and propagation of infected variables, the technique finds the set of faulty statements related to the infected variables more precisely. Comparative experiments against traditional coverage-based fault-localization techniques show that the technique can locate the infected variables and their initial infection sites, and that it can significantly improve the precision of fault localization.

9.
We here extend our earlier work on the theory of partially defined computer instructions and guards to cover partially defined computer expressions and programs. The notion of the relevant region of an expression is generalized to conditional relevant regions, and we specify upper bounds on these for expressions built up from simpler ones. We then proceed to specify upper bounds on the input and output regions of programs containing goto statements, even though the state transformations of such programs are not necessarily computable. This is then combined with our earlier commutativity results to obtain a general condition under which two simple sections of such a program commute with each other, and therefore may be interchanged, or possibly done in parallel.

10.
Program slicing is an effective technique for analyzing concurrent programs. However, when a conventional closure-based slicing algorithm for sequential programs is applied to a concurrent interprocedural program, the slice is usually imprecise owing to the intransitivity of interference dependence. Interference dependence arises when a statement uses a variable defined in another statement executed concurrently. In this study, we propose a global dependence analysis approach based on a program reachability graph, and construct a novel dependence graph called the marking-statement dependence graph (MSDG), in which each vertex is a 2-tuple of program state and statement. In contrast to the conventional program dependence graph, where each vertex is a statement, the dependence relation in the MSDG is transitive. When traversing the MSDG, a precise slice is obtained. To enhance slicing efficiency without loss of precision, our slicing algorithm adopts a hybrid strategy: procedures containing interaction statements between threads are inlined and sliced by the slicing algorithm based on program reachability graphs, while other procedures are sliced as sequential programs. We have implemented our algorithm and three other representative slicing algorithms, and conducted an empirical study on concurrent Java programs. The experimental results show that our algorithm computes more precise slices than the other algorithms. Using partial-order reduction techniques, which are effective for reducing the size of a program reachability graph without loss of precision, our algorithm is optimized, thereby improving its performance to some extent.

11.
A weak logic of programs is a formal system in which statements that mean the program halts cannot be expressed. In order to prove termination, we would usually have to use a stronger logical system. In this paper we show how we can prove termination of both iterative and recursive programs within a weak logic by augmenting the programs with counters and adding bound assertions on the counters as loop invariants and entry conditions. Thus, most existing verifiers that are based on a weak logic of programs can be used to prove termination of programs without any modification. We give examples of proofs of termination and of upper bounds on computation time that were obtained using a Pascal program verifier. The use of the method to locate causes of non-termination is also illustrated. (This research was supported in part by the Advanced Research Agency of the Office of the Secretary of Defence under contract DAHC 15-73-C-0435.)
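A small Python analogue of the counter technique, not the paper's Pascal verifier setup: the loop is augmented with a counter, and a bound assertion on that counter plays the role of the loop invariant that a weak-logic verifier could check; the bound a + b is a deliberately coarse but valid iteration bound for Euclid's algorithm.

```python
def gcd(a, b):
    """Euclid's algorithm augmented with a termination counter."""
    assert a > 0 and b > 0          # entry condition
    counter = 0
    bound = a + b                   # coarse upper bound on loop iterations
    while b != 0:
        counter += 1
        assert counter <= bound     # bound assertion acting as loop invariant
        a, b = b, a % b             # b strictly decreases, so the bound holds
    return a

print(gcd(252, 105))   # 21, and the assertions never fire
```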

12.
Automated program repair is still a highly challenging problem, mainly due to the reliance of current techniques on test cases to validate candidate patches. This leads to increasing unreliability of the final patches, since test cases are only partial specifications of the software. In the present paper, an automated program repair method is proposed by integrating genetic programming (GP) and model checking (MC). Owing to its ability to verify finite-state systems, MC is employed as the criterion for evaluating evolved programs, i.e., to calculate fitness in GP. The application of MC to fitness evaluation, which is novel in the context of program repair, addresses an important gap in current heuristic approaches to program repair. Being focused on fault detection based on desired aspects, it enables programmers to detect faults according to the definition of properties. As a general method, this characteristic can be effectively customized for different application domains and the corresponding faults. Apart from various types of faults, the proposed method is capable of handling concurrency bugs, which many general repair methods cannot. To evaluate the proposed method, it was implemented as a tool, named JBF, to repair Java programs. To meet the objectives of the study, experiments were conducted in which certain programs with known bugs were automatically repaired by the JBF tool. The obtained results are encouraging and remarkably promising.

13.
An Automatic Method for Determining Program Input Domains Based on Data-Flow Analysis
Determining a program's input domain helps guide the selection of test cases. Although the program specification prescribes the domains of the input variables, the program implementation itself also defines its own domains; if the two do not coincide completely, certain software faults can be diagnosed. This paper presents an automatic method for determining program input domains based on data-flow analysis: by performing data-flow analysis and dependence analysis on the source program, the domains of the input variables are derived. Program extraction is used to remove statements and functions unrelated to the input variables, which simplifies the source program and improves analysis efficiency. Dynamic simulation is used to determine the input domains of input variables in special cases. Experiments show that the method is effective.

14.
Presents a modeling approach based on stochastic Petri nets to estimate the reliability and availability of programs in a distributed computing system environment. In this environment, successful execution of programs is conditioned on the successful access of related files distributed throughout the system. The use of stochastic Petri nets is demonstrated by extending a basic reliability model to account for repair actions when faults occur. To this end, two possible models are discussed: the global repair model, which assumes a centralized repair team that restores the system to its original status when a failure state is reached, and the local repair model, which assumes that repairs are localized to the node where they occur. The former model is useful in evaluating the availability of programs (or the availability of the hardware support) subject to hardware faults that are repaired globally; therefore, the programs of interest can be interrupted. On the other hand, the latter model can be used to evaluate program reliability in the presence of hardware faults subject to repair, without interrupting the normal operation of the system.

15.
Definition-reference (def-use) errors are a very important and common class of errors. Currently, it is difficult for the detection of such errors to achieve both high precision and high scalability. By properly combining sensitive and insensitive detection methods and controlling the scope in which each is applied, high precision and high scalability can be achieved simultaneously. This paper proposes a novel scenario-sensitive detection method that classifies potentially faulty statements according to their triggering states, identifies the triggering scenarios of the different categories of statements, and applies detection of different costs, minimizing the detection cost without sacrificing precision. A flow-, field-, and context-sensitive scenario analysis of polynomial time complexity is designed to perform the classification; triggering scenarios are identified based on program dependence information, and path-sensitive detection is applied only to the necessary triggering scenarios. A prototype system, Minerva, implements the method. Using null-pointer dereference detection as a case study on applications totaling more than 2.9 million lines of code, with the largest single application exceeding 2 million lines, the experimental results show that Minerva's average detection time is 3 times and 46 times faster than the state-of-the-art path-sensitive detection tools Clang-sa and Saturn, respectively. Minerva's false-positive rate is only 24%, about one third of that of Clang-sa and Saturn, and Minerva was not found to miss any known errors. These results show that the proposed scenario-sensitive error detection method achieves both high scalability and high precision.

16.
This paper introduces a number of new techniques that support systematic manipulation of predicates, operators, healthiness conditions, laws and fixpoints of recursive programs. Necessary restrictions are imposed on the definition of each model so that the inheritance relation can be established by checking a few conditions on the healthiness conditions and the commands. In particular, we intend to identify the conditions that simplify the closure proof of the program operators and enable laws and fixpoints of a model to be inherited by its submodel without re-proof. A recursive program may correspond to different recursive functions in a model and its submodels, because the primitives such as abort and assignment statements are normally represented as different predicates in different models (although the original definition of each operator is unchanged). This paper studies meta-theory in this more general setting. A fixpoint theorem is discovered and applied to clarify the relationship between the partially correct relational model and the totally correct sequential model in UTP. It is shown that, although assignment statements are modified by the new healthiness conditions in the latter model, most laws (including many that are re-proved in UTP) and fixpoints can be inherited directly without re-proof, if the first argument of every sequential composition in each command tree corresponds to a terminating computation. This result also singles out a small number of laws and fixpoints that may no longer hold in the totally correct model.

17.
A Method for Fault Localization After Regression Testing
Testing and debugging are extremely closely related, and regression testing is an important activity in software testing and maintenance. Finding faults in a program is a complex process that involves understanding the program's purpose, structure, and semantics, as well as the relevant characteristics of the tests that expose the faults. This paper presents a fault-localization method based on chopping. By repeatedly exploiting debugging information and regression-test results, the method extracts from the program a set of statements that is related to particular statements and is much smaller than the original program, achieving accurate and fast fault localization.

18.
Context: Existing fault-localization techniques combine various program features and similarity coefficients with the aim of precisely assessing the similarities among the dynamic spectra of these program features to predict the locations of faults. Many such techniques estimate the probability of a particular program feature causing the observed failures. They often ignore the noise introduced by other features on the same set of executions that may lead to the observed failures. It is unclear to what extent such noise can be alleviated. Objective: This paper aims to develop a framework that reduces the noise in fault-failure correlation measurements. Method: We develop a fault-localization framework that uses chains of key basic blocks as program features and a noise-reduction methodology to improve on the similarity coefficients of fault-localization techniques. We evaluate our framework on five base techniques using five real-life medium-scaled programs in different application domains. We also conduct a case study on subjects with multiple faults. Results: The experimental results show that the synthesized techniques are more effective than their base techniques by almost 10%. Moreover, their runtime overhead factors for collecting the required feature values are practical. The case study also shows that the synthesized techniques work well on subjects with multiple faults. Conclusion: We conclude that the proposed framework has a significant and positive effect on improving the effectiveness of the corresponding base techniques.

19.
The value-replacement approach searches for statements that can affect the erroneous output of a failing run and determines the IVMPs (interesting value mapping pairs) of such statements. An IVMP consists of the set of value mappings of a statement in the original failing run together with a set of alternate value mappings that produce the correct output. Applied to fault-localization analysis, this approach is very effective at identifying the faulty statements themselves and the statements directly related to them. This paper proposes a value-replacement algorithm that considers both control and data dependences and adds an analysis of each statement's fault likelihood, enabling a comprehensive analysis of the faults in a program.
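A hedged Python sketch of the general value-replacement idea described above, using a hypothetical one-statement bug; it is not the paper's IVMP search algorithm: alternate values for a statement are tried on the failing input, and a mapping that turns the output correct is reported as an interesting value mapping pair.

```python
def buggy_program(x, replacement=None):
    """Hypothetical program with a fault at the statement computing y."""
    y = x * 3                       # faulty statement (should be x * 2)
    if replacement is not None:
        y = replacement["y"]        # value replacement at this statement
    return y + 1

def expected(x):
    return x * 2 + 1                # oracle for the correct output

failing_input = 5
# Alternate values, e.g. observed for y in (hypothetical) passing runs.
alternates = [{"y": 8}, {"y": 10}, {"y": 12}]

for mapping in alternates:
    if buggy_program(failing_input, mapping) == expected(failing_input):
        print("interesting value mapping pair:",
              {"original": {"y": failing_input * 3}, "alternate": mapping})
```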

20.
Studies indicate that techniques for tolerating hardware faults are so effective that software design errors are the leading cause of all faults encountered. To handle these unanticipated software faults, two main approaches have been proposed: N-version programming and recovery blocks. Both are based on the concept of design diversity: the assumption that different designs will exhibit different faults (if any) for the same inputs and will, therefore, provide alternatives for each other. Both approaches have advantages, but this paper focuses upon recovery blocks; specifically, the requirement to save and restore application state. Judicious saving of state has been described as “checkpointing” for over a decade. Using the object-oriented features of the revised Ada language (Ada 95) – a language widely used in this domain – we present three portable implementations of a checkpointing facility and discuss the trade-offs offered by each. Results of the implementation of these mechanisms are used to highlight both the strengths and weaknesses of some of the object-oriented features of Ada. We then show a reusable implementation of recovery blocks illustrating the checkpointing schemes. A performance analysis is made and measurements are presented in support of the analysis.
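As a language-agnostic sketch of the recovery-block pattern that the paper implements in Ada 95 (the helper names here are illustrative, not from the paper), the Python code below checkpoints the application state, runs each alternate in turn, applies the acceptance test, and falls back to the saved checkpoint whenever an alternate fails.

```python
import copy

def recovery_block(state, alternates, acceptance_test):
    """Run alternates in order, restoring a checkpoint after each failure."""
    checkpoint = copy.deepcopy(state)       # save application state
    for alternate in alternates:
        try:
            candidate = alternate(copy.deepcopy(checkpoint))
            if acceptance_test(candidate):
                return candidate            # acceptance test passed
        except Exception:
            pass                            # treat exceptions as failures
        # fall through: discard candidate; next alternate starts from checkpoint
    raise RuntimeError("all alternates failed the acceptance test")

# Hypothetical example: sort a list; the primary alternate is faulty.
def primary(data):
    data.reverse()                          # wrong: reversing does not sort
    return data

def secondary(data):
    return sorted(data)

result = recovery_block([3, 1, 2], [primary, secondary],
                        acceptance_test=lambda d: d == sorted(d))
print(result)   # [1, 2, 3], produced by the secondary alternate
```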
