Similar Literature
 10 similar articles found (search time: 62 ms)
1.
This paper proposes a strategy for automatically fixing faults in a program by combining the ideas of mutation and fault localization. Statements, ranked in order of their likelihood of containing faults, are mutated in that order to produce potential fixes for the faulty program. The proposed strategy is evaluated using 8 mutant operators against 19 programs, each with multiple faulty versions. Our results indicate that 20.70% of the faults are fixed using selected mutant operators, suggesting that the strategy holds merit for automatically fixing faults. The impact of fault localization on the efficiency of the overall fault-fixing process is investigated by experimenting with two techniques, Tarantula and Ochiai; the latter has been reported to be better at fault localization than Tarantula, and also proves to be better in the context of fault fixing using our proposed strategy. Further experiments evaluate stopping criteria for the mutant examination process and reveal that a significant fraction of the (fixable) faults can be fixed by examining a small percentage of the program code. We also report on the relative fault-fixing capabilities of the mutant operators used and discuss future work.
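The Tarantula and Ochiai metrics named in this abstract have standard, well-known formulas. A minimal sketch of ranking statements by suspiciousness (the statement spectra below are made-up values for illustration, not data from the paper):

```python
import math

def tarantula(ef, ep, nf, np_):
    # ef/ep: failing/passing tests that cover the statement;
    # nf/np_: failing/passing tests that do not cover it.
    fail_ratio = ef / (ef + nf) if (ef + nf) else 0.0
    pass_ratio = ep / (ep + np_) if (ep + np_) else 0.0
    denom = fail_ratio + pass_ratio
    return fail_ratio / denom if denom else 0.0

def ochiai(ef, ep, nf, np_):
    denom = math.sqrt((ef + nf) * (ef + ep))
    return ef / denom if denom else 0.0

# Hypothetical spectra: statement -> (ef, ep, nf, np)
spectra = {
    "s1": (3, 1, 0, 4),
    "s2": (1, 3, 2, 2),
    "s3": (0, 4, 3, 1),
}
# Most suspicious first; this order drives which statement is mutated first.
ranking = sorted(spectra, key=lambda s: ochiai(*spectra[s]), reverse=True)
print(ranking)  # → ['s1', 's2', 's3']
```

In the paper's strategy, such a ranking determines the order in which statements are mutated when searching for a fix.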

2.
Software fault localization is an important research topic in software engineering, and coverage-based fault localization (CFL), which uses program spectra, is an important class of techniques. Coincidentally correct test cases have a strong negative impact on CFL, so finding such test cases, or shielding CFL from their influence, is significant for improving the effectiveness of CFL techniques. By analyzing the impact of coincidental correctness on CFL, we identify a method for finding coincidentally correct test cases that has no false positives. On this basis, a CFL technique built on detecting coincidentally correct test cases is proposed. Experiments show that the method generally improves the localization effectiveness of existing CFL techniques.

3.
Spectrum-based fault localization is amongst the most effective techniques for automatic fault localization. However, abstractions of program execution traces, one of the required inputs for this technique, require instrumentation of the software under test at statement-level granularity in order to compute a list of potentially faulty statements. This introduces considerable overhead into the fault localization process, which can become prohibitive in, e.g., resource-constrained environments. To counter this problem, we propose a new approach, coined dynamic code coverage (DCC), aimed at reducing this instrumentation overhead. Using coarser instrumentation, the technique starts by analyzing coverage traces for large components of the system under test. It then progressively increases the instrumentation detail for faulty components until the statement level of detail is reached. To assess the validity of the proposed approach, an empirical evaluation was performed, injecting faults into six real-world software projects. The evaluation demonstrates that dynamic code coverage reduces the execution overhead of spectrum-based fault localization and even presents a more concise potential fault ranking to the user. We observed execution time reductions of 27% on average and diagnostic report size reductions of 77% on average.

4.
Recent techniques for fault localization statistically analyze coverage information from a set of test runs to measure the correlations between program entities and program failures. However, coverage information cannot identify those program entities whose execution actually affects the output, which weakens the aforementioned correlations. This paper proposes a slice-based statistical fault localization approach to address this problem. Our approach utilizes program slices of a set of test runs to capture the influence of a program entity's execution on the output, and uses statistical analysis to measure the suspiciousness of each program entity being faulty. In addition, this paper presents a new slicing approach, called approximate dynamic backward slicing, to balance the size and accuracy of a slice, and applies this slice to our statistical approach. We use two standard benchmarks and three real-life UNIX utility programs as our subjects, and compare our approach with a broad set of fault localization techniques. The experimental results show that our approach can significantly improve the effectiveness of fault localization.

5.
Program slicing does not provide a measure of how suspicious each statement is, while coverage-based analysis cannot fully capture the interactions among program elements. To address these problems, a software fault localization method based on contextual statistical analysis is proposed. First, the source program is converted into an abstract syntax tree and a program dependence graph. Next, the program is instrumented to collect runtime information. Then, starting from the failure point, on-demand backward dynamic slicing is performed to determine the context in which the failure arose. Finally, a suspiciousness score is computed statistically for each node in the backward dynamic slice, and a dynamic program slice ranked by suspiciousness is output. The method not only characterizes the context in which the failure arose but also computes the suspiciousness of each statement in that context. Experimental results show that, compared with coverage-based analysis alone, the proposed method reduces average Expense by 1.3%, and compared with slicing alone, by 5.6%; it can effectively help developers locate and fix software defects.

6.
Pure spectrum-based fault localization (SBFL) is a well-studied statistical debugging technique that takes only a set of test cases (some failing and some passing) and their code coverage as input and produces a ranked list of suspicious program elements to help the developer identify the location of a bug that causes a failed test case. Studies show that pure SBFL techniques produce good ranked lists for small programs. However, our previous study based on the iBugs benchmark, which uses the AspectJ repository, shows that for realistic programs the accuracy of the ranked list is not suitable for human developers. In this paper, we confirm this with a combined empirical evaluation on the iBugs and Defects4J benchmarks. Our experiments show that, on average, at most ∼40%, ∼80%, and ∼90% of the bugs can be localized reliably within the first 10, 100, and 1000 ranked lines, respectively, in the Defects4J benchmark. To reliably localize 90% of the bugs with the best-performing SBFL metric D, ∼450 lines have to be inspected by the developer. For human developers, this remains unsuitable, although the results improve compared with those for the AspectJ benchmark. Based on this study, we clearly see the need to go beyond pure SBFL and take other information, such as information from the bug report or from the version history of the code lines, into consideration.
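The top-k evaluation described in this abstract is easy to sketch. In the snippet below, the bug ranks are invented for illustration, and the `dstar` function assumes the abstract's "metric D" refers to the well-known D* suspiciousness family, which may not be exactly the metric the authors used:

```python
def dstar(ef, ep, nf, star=2):
    # D* suspiciousness: ef = failing tests covering the line,
    # ep = passing tests covering it, nf = failing tests not covering it.
    return ef ** star / (ep + nf) if (ep + nf) else float("inf")

def fraction_within(ranks, k):
    # ranks: rank position of the actual faulty line for each bug.
    return sum(r <= k for r in ranks) / len(ranks)

# Hypothetical ranks of ten faulty lines in their SBFL-ranked lists.
bug_ranks = [3, 12, 95, 420, 7, 60, 1000, 2, 33, 150]
for k in (10, 100, 1000):
    print(k, fraction_within(bug_ranks, k))
```

With these invented ranks, 30% of the bugs fall in the top 10, 70% in the top 100, and 100% in the top 1000, mirroring the shape of the ∼40%/∼80%/∼90% figures the paper reports.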

7.
琚小明  姚庆栋 《计算机应用》2005,25(7):1674-1675,1694
Traditional compiler testing determines whether a compiler is faulty by comparing the expected output with the output of the compiler under test. Building on this, a testing method that introduces a reference compiler and a reference simulator is proposed: during instruction-set software simulation, dynamic data trace files usable for compiler debugging are generated, and the reference dynamic data trace file is compared against the trace file produced for the compiler under test. From the comparison result, the compiler testing tool can determine where the compiler under test goes wrong, which is very useful for compiler debugging.

8.
The second-generation microfluidic biochip is known as the digital microfluidic biochip (DMFB). DMFBs perform clinical pathology experiments, DNA sequencing, air contamination detection, and many other biochemical experiments based on appropriate bio-assay protocols. A DMFB comprises a two-dimensional array of electrodes fabricated over two parallel glass plates, capable of miniaturized (nano/picoliter volume) droplet dispensing, transportation, and mixing through electrowetting on dielectric (EWOD). Droplet operations through EWOD are mostly managed by droplet scheduling algorithms, which are NP-hard in nature. At present, reliability is a major concern for the commercialization of operational DMFBs: reliable output is mostly affected by faults within electrodes, and further suffers from cross-contamination among droplets due to repeated use of DMFB boards for different assay operations. The present work proposes a novel test droplet routing method based on an adaptive weighted particle swarm optimization (PSO) model. This test droplet circulation method aims to identify defective electrodes while simultaneously performing residue removal. Experimental findings on standard test benches and real-life bio-assay samples show that the proposed model outperforms some of the best known existing models in overall computational time and operational accuracy.
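The adaptive-weight details of the paper's PSO model are specific to that work, but the optimization core it builds on can be sketched generically. A minimal PSO minimizing a toy objective follows; it uses a fixed inertia weight rather than the paper's adaptive weighting, and all parameter values are illustrative:

```python
import random

def pso(objective, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rnd = random.Random(seed)  # fixed seed for reproducibility
    pos = [[rnd.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]               # each particle's best position
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm's best so far
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rnd.random(), rnd.random()
                # inertia + cognitive pull (pbest) + social pull (gbest)
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Minimize the 2-D sphere function; in the paper's setting the objective
# would instead score candidate test-droplet routes.
best, best_val = pso(lambda x: sum(v * v for v in x), dim=2)
print(best_val)
```

In the paper, candidate solutions would encode test droplet routes and the objective would reward covering suspect electrodes quickly; the swarm-update mechanics above are unchanged.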

9.
To address the low localization accuracy of the existing approximate point-in-triangulation test (APIT) algorithm in beacon-dense environments and its low coverage in beacon-sparse environments, a hybrid localization algorithm is proposed. The algorithm improves localization accuracy in beacon-dense environments by reducing misjudgments in the point-in-triangulation (PIT) test and selecting well-conditioned triangles. It also exploits the ability of the DV-Hop algorithm, combined with a two-point localization method, to compute unknown node coordinates in sparse environments, thereby improving localization coverage when beacon nodes are sparse. Simulation analysis shows that the hybrid algorithm effectively improves localization accuracy in beacon-dense environments and localization coverage in beacon-sparse environments.
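The DV-Hop component mentioned above estimates node distances from hop counts: each beacon computes an average per-hop distance from its known distances and hop counts to other beacons. A minimal sketch of that step (beacon positions and hop counts below are made-up):

```python
import math

def dv_hop_avg_distance(beacons, hop_counts):
    # beacons: {id: (x, y)} with known coordinates.
    # hop_counts: {frozenset({i, j}): hops between beacons i and j}.
    avg = {}
    for i, (xi, yi) in beacons.items():
        total_dist = total_hops = 0
        for j, (xj, yj) in beacons.items():
            if i == j:
                continue
            total_dist += math.hypot(xi - xj, yi - yj)
            total_hops += hop_counts[frozenset((i, j))]
        # Average distance that one hop represents around beacon i.
        avg[i] = total_dist / total_hops
    return avg

beacons = {"A": (0, 0), "B": (30, 0), "C": (0, 40)}
hops = {frozenset(("A", "B")): 3, frozenset(("A", "C")): 4, frozenset(("B", "C")): 5}
avg = dv_hop_avg_distance(beacons, hops)
print(avg)
```

An unknown node then multiplies its hop count to each beacon by the nearest beacon's average hop distance to estimate ranges, after which its position can be solved (e.g., by the two-point method the abstract mentions).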

10.
Luan Jianing, Zhang Wei, Sun Wei, Zhang Ao, Han Dong. Journal of Computer Applications (计算机应用), 2021, 41(5): 1484-1491
To solve the poor localization accuracy and poor resistance to robot kidnapping of laser-based indoor localization algorithms typified by Monte Carlo localization, as well as the complex environment setup and strict trajectory requirements of traditional QR-code localization, a mobile robot localization algorithm that fuses QR-code vision with lidar data is proposed. The robot first uses machine vision to search for and detect QR codes in the environment, then transforms the poses of the detected QR codes into the map coordinate frame, ...


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号