Similar Articles (20 found)
1.
Toward an understanding of bug fix patterns
Twenty-seven automatically extractable bug fix patterns are defined using the syntax components and context of the source code involved in bug fix changes. Bug fix patterns are extracted from the configuration management repositories of seven open source projects, all written in Java (Eclipse, Columba, JEdit, Scarab, ArgoUML, Lucene, and MegaMek). The defined bug fix patterns cover 45.7% to 63.3% of the total bug fix hunk pairs in these projects. The frequency of occurrence of each bug fix pattern is computed across all projects. The most common individual patterns are MC-DAP (method call with different actual parameter values) at 14.9–25.5%, IF-CC (change in if conditional) at 5.6–18.6%, and AS-CE (change of assignment expression) at 6.0–14.2%. A correlation analysis of the extracted pattern instances across the seven projects shows that six have very similar bug fix pattern frequencies. Analysis of if conditional bug fix sub-patterns shows a trend towards increasing conditional complexity in if conditional fixes. Analysis of five developers in the Eclipse project shows overall consistency with project-level bug fix pattern frequencies, as well as distinct variations among developers in their rates of producing various bug patterns. Overall, the data in the paper suggest that developers have difficulty with specific code situations at surprisingly consistent rates. There appear to be broad mechanisms causing the injection of bugs that are largely independent of the type of software being produced.
E. James Whitehead Jr.
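The pattern names above suggest a simple illustration. Below is a minimal Python sketch of classifying a bug-fix hunk pair into three of the abstract's pattern categories (IF-CC, MC-DAP, AS-CE). The regex heuristics are assumptions for illustration; the paper's extractor works on parsed Java syntax, not regexes.

```python
import re

# Illustrative matchers for three pattern categories named in the abstract.
# The real extractor analyzes Java syntax trees; these regexes are a sketch.
IF_COND = re.compile(r"if\s*\((?P<cond>.*)\)")
CALL = re.compile(r"(?P<name>\w+)\s*\((?P<args>[^)]*)\)")
ASSIGN = re.compile(r"^\s*[\w.\[\]]+\s*=\s*(?P<rhs>.+);")

def classify_hunk_pair(old_line: str, new_line: str) -> str:
    old_if, new_if = IF_COND.search(old_line), IF_COND.search(new_line)
    if old_if and new_if and old_if["cond"] != new_if["cond"]:
        return "IF-CC (change in if conditional)"
    old_call, new_call = CALL.search(old_line), CALL.search(new_line)
    if (old_call and new_call and old_call["name"] == new_call["name"]
            and old_call["args"] != new_call["args"]):
        return "MC-DAP (same call, different actual parameters)"
    old_as, new_as = ASSIGN.match(old_line), ASSIGN.match(new_line)
    if old_as and new_as and old_as["rhs"] != new_as["rhs"]:
        return "AS-CE (change of assignment expression)"
    return "unclassified"

print(classify_hunk_pair("if (x > 0) {", "if (x >= 0) {"))  # IF-CC
print(classify_hunk_pair("foo(a, b);", "foo(a, c);"))       # MC-DAP
```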

2.
When interacting with a source control management system, developers often commit unrelated or loosely related code changes in a single transaction. When analyzing version histories, such tangled changes make all changes to all modules appear related, possibly compromising the resulting analyses through noise and bias. In an investigation of five open-source Java projects, we found between 7 % and 20 % of all bug fixes to consist of multiple tangled changes. Using a multi-predictor approach to untangle changes, we show that on average at least 16.6 % of all source files are incorrectly associated with bug reports. These incorrect bug-to-file associations do not appear to significantly impact models that classify source files as having at least one bug or none. But our experiments show that untangling tangled code changes can result in more accurate regression bug prediction models when compared to models trained and tested on tangled bug datasets; in our experiments, the statistically significant accuracy improvements lie between 5 % and 200 %. We recommend better change organization to limit the impact of tangled changes.
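A minimal sketch of how a multi-predictor untangling step might look, assuming two illustrative "voters" (package proximity and historical co-change) and an invented confidence threshold; the paper's actual predictors differ.

```python
# Sketch: several heuristic voters score whether two changed files belong to
# the same partition; files are greedily clustered by averaged confidence.
# The voters and the 0.5 threshold are illustrative assumptions.
from itertools import combinations

def same_package(a, b):
    return float(a.rsplit("/", 1)[0] == b.rsplit("/", 1)[0])

def co_change(a, b, history):
    together = sum(1 for commit in history if a in commit and b in commit)
    either = sum(1 for commit in history if a in commit or b in commit)
    return together / either if either else 0.0

def untangle(changed_files, history, threshold=0.5):
    partitions = [{f} for f in changed_files]
    def confidence(a, b):
        return (same_package(a, b) + co_change(a, b, history)) / 2
    merged = True
    while merged:
        merged = False
        for p, q in combinations(partitions, 2):
            if max(confidence(a, b) for a in p for b in q) >= threshold:
                p |= q
                partitions.remove(q)
                merged = True
                break
    return partitions

history = [{"ui/View.java", "ui/Controller.java"}, {"db/Dao.java"}]
print(untangle({"ui/View.java", "ui/Controller.java", "db/Dao.java"}, history))
# -> two partitions: the UI pair, and the unrelated database change
```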

3.
The goal of regression testing is to ensure that the behaviour of existing code, believed correct by previous testing, is not altered by new program changes. This paper argues that the primary focus of regression testing should be on code associated with (1) earlier bug fixes and (2) particular application scenarios considered important by the developer or tester. Existing coverage criteria do not enable such focus; for example, 100% branch coverage does not guarantee that a given bug fix is exercised or that a given application scenario is tested. Therefore, there is a need for a new and complementary coverage criterion in which the user can define a test requirement characterizing a given behaviour to be covered, as opposed to choosing from a pool of pre-defined and generic program elements. This paper proposes this new methodology and calls it UCov, a user-defined coverage criterion wherein a test requirement is an execution pattern of program elements, and possibly predicates, that a test case must satisfy. The proposed criterion is not meant to replace existing criteria, but to complement them, as it focuses the testing on important code patterns that could otherwise go untested. UCov supports test case intent verification. For example, following a bug fix, the testing team may augment the regression suite with the test case that revealed the bug. However, this test case might become obsolete due to code modifications not related to the bug. But if a test requirement characterizing the bug was defined by the user, UCov would determine that test case intent verification failed. The UCov methodology was implemented for the Java platform, was successfully applied to 10 real-life case studies, and was shown to have advantages over JUnit. The implementation comprises the following tools: (1) TRSpec, which allows the user to easily specify complex test requirements; (2) TRCheck, which checks whether user-defined test requirements were satisfied, that is, supports test case intent verification; and (3) TRMigrate, which migrates user-defined test requirements to subsequent versions of a given program. Copyright © 2016 John Wiley & Sons, Ltd.
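A minimal sketch of the core UCov check, under the assumption that a test requirement can be encoded as an ordered list of (event, predicate) pairs that an execution trace must contain as a subsequence; the encoding and event names are illustrative, not TRSpec's syntax.

```python
# Sketch: a user-defined test requirement is an ordered pattern of program
# events, optionally guarded by predicates, that a trace must satisfy.
def satisfies(trace, requirement):
    """trace: list of (event, state) pairs; requirement: list of
    (event, predicate) pairs that must occur in order."""
    i = 0
    for event, state in trace:
        want_event, predicate = requirement[i]
        if event == want_event and predicate(state):
            i += 1
            if i == len(requirement):
                return True
    return False

# Requirement: enter the once-buggy branch with n < 0, then reach the
# patched call. Event names and states are invented for illustration.
requirement = [
    ("enter:normalize", lambda s: s["n"] < 0),
    ("call:clamp", lambda s: True),
]
trace = [("enter:normalize", {"n": -3}), ("call:clamp", {"n": -3})]
print(satisfies(trace, requirement))  # True -> test case intent preserved
```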

4.
Software crashes are severe manifestations of software bugs. Debugging crashing bugs is tedious and time-consuming. Understanding the software changes that induce a crashing bug can provide useful contextual information for bug fixing and is in high demand by developers. Locating the bug-inducing changes is also useful for automatic program repair, since it narrows down the root causes and reduces the search space of bug fix locations. However, there are currently no systematic studies on locating the changes to a source code repository that induce a crashing bug reflected by a bucket of crash reports. To tackle this problem, we first conducted an empirical study characterizing the bug-inducing changes for crashing bugs (denoted as crash-inducing changes). We also propose ChangeLocator, a method to automatically locate crash-inducing changes for a given bucket of crash reports. We base our approach on a learning model that uses features derived from our empirical study, and we train the model using data from historical fixed crashes. We evaluated ChangeLocator on six release versions of the NetBeans project. The results show that it can locate the crash-inducing changes for 44.7%, 68.5%, and 74.5% of the bugs by examining only the top 1, 5, and 10 changes in the recommended list, respectively. It significantly outperforms the existing state-of-the-art approach.
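A minimal sketch of the ranking step, assuming scikit-learn is available; the three features are invented stand-ins, since the paper derives its own feature set from its empirical study.

```python
# Sketch: train a classifier on historical changes labeled crash-inducing or
# not, then rank the candidate changes for a new crash bucket by predicted
# probability. Feature choices below are illustrative assumptions.
from sklearn.linear_model import LogisticRegression

# feature vector: [fraction of changed methods on the crash stack,
#                  normalized lines changed, normalized age of the change]
X_train = [[0.8, 0.3, 0.1], [0.0, 0.9, 0.8], [0.6, 0.2, 0.2], [0.1, 0.1, 0.9]]
y_train = [1, 0, 1, 0]  # 1 = known crash-inducing change

model = LogisticRegression().fit(X_train, y_train)

candidates = {"rev101": [0.7, 0.4, 0.1], "rev102": [0.0, 0.2, 0.3]}
scores = {rev: model.predict_proba([f])[0][1] for rev, f in candidates.items()}
for rev, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(rev, round(score, 3))  # examine the top-k changes first
```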

5.
Information Retrieval (IR) approaches, such as Latent Semantic Indexing (LSI) and the Vector Space Model (VSM), are commonly applied to recover software traceability links. Recently, an approach based on developers' eye gazes was proposed to retrieve traceability links. This paper presents a comparative study of IR and eye-gaze based approaches. In addition, it reports on the possibility of using eye gaze links as an alternative benchmark in comparison to commits. The study asked developers to perform bug-localization tasks on the open source subject system JabRef. The iTrace environment, an eye-tracking-enabled Eclipse plugin, was used to collect eye gaze data. During the data collection phase, an eye tracker was used to gather the source code entities (SCEs) developers looked at while solving these tasks. We present an algorithm that uses the collected gaze dataset to produce candidate traceability links related to the tasks. In the evaluation phase, we compared the results of our algorithm with the results of an IR technique in two different contexts. In the first context, precision and recall metric values are reported for both the IR and eye gaze approaches based on commits. In the second context, another set of developers was asked to rate the candidate links from each of the two techniques in terms of how useful they were in fixing the bugs. The eye gaze approach outperforms the standard LSI and VSM approaches, reporting 55 % precision and 67 % recall on average for all tasks when compared to how the developers actually fixed the bug. In the second context, the usefulness results show that links generated by our algorithm were considered significantly more useful (for fixing the bug) than those of the IR technique in a majority of tasks. We discuss the implications of this radically different method of deriving traceability links. Techniques for feature location/bug localization are commonly evaluated on benchmarks formed from commits, as is done in the evaluation phase of this study. Although commits are a reasonable source, they only capture entities that were eventually changed to fix a bug or resolve a feature. We investigate another type of benchmark based on eye tracking data, namely links generated from the bug-localization tasks given to the developers in the data collection phase. The source code entities that IR methods recommend as relevant to the studied bugs are evaluated against both commits and links generated from eye gaze. The results of the benchmarking phase show that the use of eye tracking could form an effective (complementary) benchmark and adds another interesting perspective to the evaluation of bug-localization techniques.
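A minimal sketch contrasting the two link sources, with toy JabRef-flavoured data: a VSM (tf-idf plus cosine) ranking of source code entities against a bug report, and a ranking by aggregated fixation time. The gaze aggregation rule is an assumption, not iTrace's algorithm.

```python
import math
from collections import Counter

# Plain VSM: tf-idf vectors over whitespace tokens, cosine similarity.
def tfidf_vectors(docs):
    df = Counter(t for doc in docs.values() for t in set(doc.split()))
    n = len(docs)
    return {name: {t: c * math.log(n / df[t])
                   for t, c in Counter(doc.split()).items()}
            for name, doc in docs.items()}

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    norm = math.sqrt(sum(x * x for x in u.values())) * \
           math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

entities = {  # invented source code entities and their index text
    "BibEntry.save": "save entry database write bibtex",
    "GroupTree.render": "render group tree panel",
}
bug_report = {"bug": "saving an entry corrupts the bibtex database"}
vecs = tfidf_vectors({**entities, **bug_report})
vsm_rank = sorted(entities, key=lambda e: -cosine(vecs["bug"], vecs[e]))

# Gaze-based ranking: total fixation milliseconds per entity during the task.
fixations = {"BibEntry.save": 5400, "GroupTree.render": 300}
gaze_rank = sorted(fixations, key=fixations.get, reverse=True)
print(vsm_rank, gaze_rank)  # the two rankings can then be compared
```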

6.
An approach is presented that automatically determines whether a given source code change impacts the design (i.e., the UML class diagram) of the system. This allows code-to-design traceability to be consistently maintained as the source code evolves. The approach uses lightweight analysis and syntactic differencing of the source code changes to determine whether the change alters the class diagram in the context of abstract design. The intent is to support both the simultaneous updating of design documents with code changes and bringing old design documents up to date with current code given the change history. An efficient tool was developed to support the approach and was applied to an open source system. The results are evaluated and compared against manual inspection by human experts. The tool performs better than (error-prone) manual inspection. The developed approach and tool were used to empirically investigate how changes to source code (i.e., commits) break code-to-design traceability during evolution and what benefits such understanding brings. Commits are categorized as design-impacting or not. The commits of four open source projects over 3-year time spans are extracted and analyzed. The results of the study show that most code changes do not impact the design, and these commits touch fewer files and fewer lines than commits with design impact. The results also show that most bug fixes do not impact design.
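A minimal sketch of the design-impact test, under the assumption that class-level declarations can stand in for the class diagram; the regex extraction is illustrative and far cruder than the tool's syntactic differencing.

```python
import re

# Sketch: a change impacts the class diagram only if it alters class-level
# declarations (class names, fields, method signatures), not method bodies.
DECL = re.compile(
    r"^\s*(?:public|protected|private)[\w\s<>,\[\]]*?"
    r"(?:class\s+\w+|\w+\s+\w+\s*\([^)]*\)|\w+\s+\w+\s*;)", re.MULTILINE)

def design_signature(java_source: str) -> set:
    return set(m.group(0).strip() for m in DECL.finditer(java_source))

def impacts_design(before: str, after: str) -> bool:
    return design_signature(before) != design_signature(after)

before = """public class Cart {
    private int total;
    public int total() { return total; }
}"""
after_body_fix = before.replace("return total;", "return Math.max(0, total);")
after_new_api = before[:-1] + "    public void clear() { }\n}"

print(impacts_design(before, after_body_fix))  # False: body-only bug fix
print(impacts_design(before, after_new_api))   # True: new public method
```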

7.
8.
Compared with other types of defects, security defects are more likely to require re-fixing, which means they consume more development resources and drive up the cost of their repair. Reducing the occurrence of security defect re-fixes is therefore clearly important, and empirical study of such re-fixes can help achieve this. First, by analyzing data on re-fixed security defects in the Mozilla project (including the type of security defect, the reasons for re-fixing, the number of re-fixes, the number of commits, the number of modified files, lines of code added and removed, and a comparison of the initial fix with the re-fix), we found that re-fixing of security defects is widespread and is related to two factors: how hard it is to identify the cause of the defect, and how complex the fix itself is. Second, how concentrated the files and code touched by the initial fix are is one of the factors influencing re-fixing, and adopting a more thorough and more effective fix process can effectively prevent re-fixes. Finally, several causes of security defect re-fixing are summarized so that developers can effectively recognize the different types of security defect re-fixes.

9.
Machine learning (ML) techniques and algorithms have been successfully and widely used in various areas, including software engineering tasks. Like other software projects, ML projects and libraries commonly contain bugs. In order to understand the features related to bug fixing in ML projects more deeply, we conduct an empirical study of 939 bugs from five ML projects by manually examining the bug categories, fixing patterns, fixing scale, fixing duration, and types of maintenance. The results show that (1) there are commonly seven types of bugs in ML programs; (2) twelve fixing patterns are typically used to fix the bugs in ML programs; (3) 68.80% of the patches belong to micro-scale fixes and small-scale fixes; (4) 66.77% of the bugs in ML programs can be fixed within one month; and (5) 45.90% of the bug fixes belong to corrective activity from the perspective of software maintenance. Moreover, we performed a questionnaire survey, sent to developers and users of ML projects, to validate the results of our empirical study, and the results are largely consistent with the developers' feedback. The findings provide useful guidance and insights for developers and users to effectively detect and fix bugs in ML projects.

10.
It is said that 90% of faults that survive a manufacturer's testing procedures are complex; that is, the corresponding bug fix contains multiple changes. Higher order mutation testing is used to study defect interactions and their impact on software testing for fault finding. We adopt a multi-objective Pareto optimal approach using Monte Carlo sampling, genetic algorithms, and genetic programming to search for higher order mutants which are both hard to kill and realistic. The space of complex faults (higher order mutants) is much larger than that of traditional first order mutations, which correspond to simple faults; nevertheless, search based approaches make this scalable. The problems of non-determinism and efficiency are overcome. Easy-to-detect faults may become harder to detect when they interact, and impossible-to-detect single faults may be brought to light when code contains two such faults. We use strong typing and BNF grammars in search based mutation testing to find examples of both in mature, heavily optimised, everyday C code.
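A minimal sketch of the Monte Carlo part of the search, on a toy Python program with three seeded mutation points; the objectives and subjects are assumptions (the paper searches real C code and also uses genetic algorithms and genetic programming).

```python
import random

# Sketch: combine first-order mutations into second-order ones by Monte
# Carlo sampling and keep those killed by the fewest tests (but at least
# one) as "hard-to-kill but killable" candidates.
def program(x, active=()):
    # three seeded first-order mutation points
    y = x + (1 if "off_by_one" in active else 0)
    y = -y if "negate" in active else y
    return max(y, 0) if "clamp_removed" not in active else y

TESTS = [(-2, 0), (0, 0), (3, 3)]  # (input, expected output of original)
OPS = ["off_by_one", "negate", "clamp_removed"]

def kills(mutant):
    return sum(program(x, mutant) != want for x, want in TESTS)

random.seed(7)
sampled = {tuple(sorted(random.sample(OPS, 2))) for _ in range(50)}
scored = sorted((kills(m), m) for m in sampled)
hard_to_kill = [m for k, m in scored if k >= 1][:3]
print(hard_to_kill)  # killed by few tests, hence subtle interacting faults
```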

11.
System software typically offers a large amount of compile-time options and variability. A good example is the Linux kernel, which provides more than 10,000 configurable features, a number that is growing rapidly. This allows users to tailor it to a broad range of supported hardware architectures and application domains. From the maintenance point of view, compile-time configurability poses big challenges. The configuration model (the selectable features and their constraints as presented to the user) and the configurability actually implemented in the code have to be kept in sync, which, if performed manually, is a tedious and error-prone task. In the case of Linux, this has led to numerous defects in the source code, many of which are actual bugs. In order to ensure consistency between the variability expressed in the code and the configuration models, we propose an approach that extracts the variability from both into propositional logic. This reveals inconsistencies between the variability expressed by the C Preprocessor (CPP) and the explicit variability model, which manifest themselves as seemingly conditional code that is in fact unconditional. We evaluate our approach on Linux, for which our tool detects 1,766 configurability defects, which turned out to be dead or superfluous source code and bugs. Our findings have led to numerous source-code improvements and bug fixes in Linux: 123 patches (49 merged) fix 364 defects, 147 of which have been confirmed by the corresponding Linux developers and 20 of which fix a previously unknown bug.
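A minimal sketch of the consistency check: encode the configuration model and a CPP block's presence condition as propositional formulas, and report the block as a configurability defect (dead code) if no legal configuration enables it. Brute-force enumeration stands in for the SAT solving needed at Linux scale, and the features are invented.

```python
from itertools import product

FEATURES = ["NET", "WLAN", "ETHERNET"]

def model(cfg):
    # Kconfig-style constraints: WLAN and ETHERNET each depend on NET.
    return (not cfg["WLAN"] or cfg["NET"]) and \
           (not cfg["ETHERNET"] or cfg["NET"])

def block_is_dead(presence_condition):
    # Dead iff (model AND presence condition) is unsatisfiable.
    return not any(model(cfg) and presence_condition(cfg)
                   for cfg in ({f: v for f, v in zip(FEATURES, bits)}
                               for bits in product([False, True],
                                                   repeat=len(FEATURES))))

# #if defined(WLAN) && !defined(NET): contradicts the model -> dead code.
print(block_is_dead(lambda c: c["WLAN"] and not c["NET"]))   # True
# #if defined(WLAN) && defined(ETHERNET): satisfiable -> live code.
print(block_is_dead(lambda c: c["WLAN"] and c["ETHERNET"]))  # False
```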

12.
Developers occasionally make more than one patch to fix a bug. The related patches are sometimes intentionally separated, but unintended omission errors require supplementary patches. Several change recommendation systems based on clone analysis, structural dependency, and historical change coupling have been suggested in order to reduce or prevent incomplete patches. However, very few studies have examined why incomplete patches occur and how real-world omission errors could be reduced. This paper systematically studies a group of bugs that were fixed more than once in open source projects in order to understand the characteristics of incomplete patches. Our study of Eclipse JDT core, Eclipse SWT, Mozilla, and Equinox p2 showed that a significant portion of resolved bugs require more than one attempt to fix. Compared to single-fix bugs, multi-fix bugs did not have lower-quality bug reports, but more attribute changes (e.g., cc'ed developers or title) were made to multi-fix bugs than to single-fix bugs. Multi-fix bugs are more likely to have high severity levels than single-fix bugs; hence, more developers participated in discussions about multi-fix bugs than about single-fix bugs. Multi-fix bugs also take more time to resolve than single-fix bugs. Incomplete patches are longer and more scattered, and they touch more files than regular patches. Our manual inspection showed that the causes of incomplete patches are diverse, including missed porting updates, incorrect handling of conditional statements, and incomplete refactoring. Our investigation showed that only 7 % to 17 % of supplementary patches had content similar to their initial patches, which implies that supplementary patch locations cannot be predicted by code clone analysis alone. Furthermore, 16 % to 46 % of supplementary patches were beyond the scope of the immediate structural dependency of their initial patch locations. Historical co-change patterns also showed low precision in predicting supplementary patch locations. Code clone, structural dependency, and historical co-change analyses predicted different supplementary patch locations, with little overlap between them, and even combining these analyses did not cover all supplementary patch locations. The present study investigates the characteristics of incomplete patches and multi-fix bugs, which have not been systematically examined in previous research. We reveal that predicting supplementary patches is a difficult problem that existing change recommendation approaches cannot cover; new types of approaches should be developed and validated on supplementary patch data sets, that is, on cases where developers failed to produce a complete patch in a single attempt.
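A minimal sketch of the historical co-change analysis evaluated above, with invented data, showing how predicted supplementary patch locations can be scored for precision and recall.

```python
from collections import Counter

history = [  # past commits as file sets (illustrative)
    {"Editor.java", "Undo.java"}, {"Editor.java", "Undo.java"},
    {"Editor.java", "Render.java"}, {"Parser.java"},
]

def co_change_candidates(initial_files, top_k=2):
    # Predict supplementary locations: files most often changed together
    # with the initial patch's files in past commits.
    votes = Counter()
    for commit in history:
        if commit & initial_files:
            votes.update(commit - initial_files)
    return {f for f, _ in votes.most_common(top_k)}

initial = {"Editor.java"}
actual_supplementary = {"Undo.java", "Clipboard.java"}
predicted = co_change_candidates(initial)
tp = predicted & actual_supplementary
print("precision", len(tp) / len(predicted),
      "recall", len(tp) / len(actual_supplementary))
# Both are 0.5 here, mirroring the low precision the study reports.
```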

13.
During software development and maintenance, the engineers who fix defects typically locate and repair them based on bug reports submitted by end users or by developers and testers. The quality of a bug report therefore plays an important role in whether a fixer can locate and repair the defect quickly and accurately. A large body of research has addressed characterizing and improving bug report quality, but it has not yet been systematically surveyed. This paper aims to organize that work systematically, present the state of research in the area, and offer reference points for future research directions. First, we summarize the quality problems found in existing bug reports, such as missing key information and incorrect information; next, we summarize techniques for automatically modeling bug report quality; then, we describe a series of approaches for improving bug report quality; finally, we discuss the challenges and opportunities that future research may face.

14.
New methodologies and tools have gradually made the software development life cycle more human-independent. Much of the research in this field focuses on defect reduction, defect identification, and defect prediction. Defect prediction is a relatively new research area that draws on methods ranging from artificial intelligence to data mining. Identifying and locating defects in software projects is a difficult task. Measuring software in a continuous and disciplined manner provides many advantages, such as accurate estimation of project costs and schedules and improved product and process quality. This study proposes a model to predict the number of defects in a new version of a software product with respect to the previous stable version. The new version may contain changes related to a new feature, a modification in an algorithm, or bug fixes. Our proposed model predicts the new defects introduced into the new version by analyzing the types of changes in an objective and formal manner as well as considering the change in lines of code (LOC). Defect predictors are helpful tools for both project managers and developers. Accurate predictors may help reduce test times and guide developers towards implementing higher quality code. Our proposed model can aid software engineers in determining the stability of software before it goes into production. Furthermore, such a model may provide useful insight for understanding the effects of a feature, bug fix, or change in the process of defect detection.
Ayşe Basar Bener
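A minimal sketch of the proposed kind of model, assuming scikit-learn: regress defect counts on illustrative change-type counts and the LOC delta. The feature encoding and data are assumptions, not the paper's.

```python
from sklearn.linear_model import LinearRegression

# rows: [new_features, algorithm_changes, bug_fixes, kloc_changed]
X = [[3, 1, 5, 2.0], [0, 0, 8, 0.5], [5, 2, 2, 4.1], [1, 0, 3, 0.8]]
y = [12, 2, 19, 4]  # defects reported against each past release (invented)

model = LinearRegression().fit(X, y)
next_release = [[2, 1, 4, 1.5]]
predicted = model.predict(next_release)[0]
print(f"expected new defects: {predicted:.1f}")
# A high estimate flags the release as not yet stable enough for production.
```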

15.
NetBSD is a Unix variant that is widely used in network devices as the operating system for their control plane. This paper reports defects found in its networking subsystem during use, along with their fixes.

16.
Context: The component field in a bug report provides important location information required by developers during bug fixes. Research has shown that incorrect component assignment for a bug report often causes problems and delays in bug fixes. A topic model technique, Latent Dirichlet Allocation (LDA), has been developed to create a component recommender for bug reports. Objective: We seek to investigate a better way to use topic modeling in creating a component recommender. Method: This paper presents a component recommender based on the proposed Discriminative Probability Latent Semantic Analysis (DPLSA) model and Jensen–Shannon divergence (DPLSA-JS). The proposed DPLSA model provides a novel method to initialize the word distributions for different topics: it uses the bug reports previously assigned to each component in the model training step. This results in a correlation between the learned topics and the components. Results: We evaluate the proposed approach on five open source projects: Mylyn, Gcc, Platform, Bugzilla, and Firefox. The results show that the proposed approach on average outperforms the LDA-KL method by 30.08%, 19.60%, and 14.13% for recall@1, recall@3, and recall@5, and outperforms the LDA-SVM method by 31.56%, 17.80%, and 8.78% for recall@1, recall@3, and recall@5, respectively. Conclusion: We find that using comments in the DPLSA-JS recommender does not always contribute to performance. Vocabulary size does matter in DPLSA-JS; different projects need to set the vocabulary size adaptively by experiment. In addition, the correspondence between the learned topics and the components in DPLSA increases the discriminative power of the topics, which is useful for the recommendation task.
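A minimal sketch of the DPLSA-JS recommendation step: one word distribution is initialized per component from that component's past reports, and a new report is matched by Jensen–Shannon divergence. The smoothing and toy corpus are assumptions, and the full model additionally trains with EM.

```python
import math
from collections import Counter

VOCAB = ["ui", "render", "crash", "query", "index", "timeout"]

def distribution(texts, alpha=0.1):
    # Smoothed word distribution over VOCAB from a set of bug reports.
    counts = Counter(t for text in texts for t in text.split())
    total = sum(counts[w] for w in VOCAB) + alpha * len(VOCAB)
    return [(counts[w] + alpha) / total for w in VOCAB]

def js_divergence(p, q):
    m = [(a + b) / 2 for a, b in zip(p, q)]
    kl = lambda u, v: sum(a * math.log(a / b) for a, b in zip(u, v) if a > 0)
    return (kl(p, m) + kl(q, m)) / 2

past_reports = {  # invented component-labeled reports
    "UI": ["ui render crash", "render ui"],
    "Search": ["query index timeout", "index query"],
}
topics = {c: distribution(texts) for c, texts in past_reports.items()}

new_report = distribution(["render crash on ui resize"])
ranked = sorted(topics, key=lambda c: js_divergence(new_report, topics[c]))
print(ranked)  # recommend components in order, e.g. ['UI', 'Search']
```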

17.
18.
Model transformations are central components of most model-based software projects. While ensuring their correctness is vital to guaranteeing the quality of the solution, current transformation tools provide limited support for statically detecting and fixing errors. As a result, identifying and correcting errors are today mostly manual activities which incur high costs. The aim of this work is to improve this situation. Recently, we developed a static analyser that combines program analysis and constraint solving to identify errors in ATL model transformations. In this paper, we present a novel method and system that uses our analyser to propose suitable quick fixes for ATL transformation errors, notably some non-trivial, transformation-specific ones. Our approach supports speculative analysis to help developers select the most appropriate fix by creating a dynamic ranking of fixes, reporting on the consequences of applying a quick fix, and providing a pre-visualization of each quick fix application. The approach integrates seamlessly with the ATL editor. Moreover, we provide an evaluation based on existing faulty transformations built by a third party, and on automatically generated transformation mutants, which are then corrected with the quick fixes of our catalogue.
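A minimal sketch of speculative quick-fix ranking, with a toy "analyser" and string-level fixes standing in for the ATL-specific machinery: each candidate fix is applied to a scratch copy, the analyser is re-run, and fixes are ranked by how many errors remain.

```python
def analyse(transformation):
    # Toy stand-in for the static analyser: reports errors by keyword.
    errors = []
    if "missingRule" in transformation:
        errors.append("unresolved rule reference")
    if "typo" in transformation:
        errors.append("unknown feature name")
    return errors

def rank_quick_fixes(transformation, fixes):
    # Speculative analysis: try each fix on a copy, count remaining errors.
    scored = []
    for name, apply_fix in fixes.items():
        remaining = len(analyse(apply_fix(transformation)))
        scored.append((remaining, name))
    return [name for _, name in sorted(scored)]

broken = "rule A { missingRule typo }"
fixes = {
    "create referenced rule": lambda t: t.replace("missingRule", "ruleB"),
    "rename feature":         lambda t: t.replace("typo", "name"),
    "delete statement":       lambda t: t.replace("missingRule typo", ""),
}
print(rank_quick_fixes(broken, fixes))
# 'delete statement' removes both errors, so it ranks first.
```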

19.
Bug localization is a key step in fixing software defects. As software grows ever more complex and networks develop rapidly, quickly and efficiently locating the code related to a defect has become a pressing problem. Building on existing information retrieval based bug localization methods and taking bug-fix history into account, this paper proposes a two-phase bug localization method based on bug-fix history. Instead of relying solely on textual similarity, the method starts from the locality of bug fixes and gives more weight to factors such as bug-fix history records, change information, and code features, combining information retrieval with defect prediction to improve localization accuracy. Finally, two open source projects are used as examples to validate the feasibility and effectiveness of the method.
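A minimal sketch of a two-phase, history-aware localizer in the spirit of the abstract, with invented weights and data: phase one scores files by textual overlap with the bug report, phase two boosts files that were fixed recently or often.

```python
from collections import Counter

def text_score(report_tokens, file_tokens):
    # Phase 1: crude textual similarity (token overlap, VSM-like).
    report, f = Counter(report_tokens), Counter(file_tokens)
    return sum(min(report[t], f[t]) for t in report)

def history_score(path, fix_history, decay=0.5):
    # Phase 2: more weight for more recent fixes; history is newest-first.
    return sum(decay ** age for age, files in enumerate(fix_history)
               if path in files)

def localize(report, files, fix_history, w_text=0.7, w_hist=0.3):
    ranked = []
    for path, tokens in files.items():
        score = w_text * text_score(report, tokens) + \
                w_hist * history_score(path, fix_history)
        ranked.append((score, path))
    return [p for _, p in sorted(ranked, reverse=True)]

files = {"net/Socket.java": ["socket", "timeout", "connect"],
         "ui/Dialog.java": ["dialog", "button"]}
fix_history = [{"net/Socket.java"}, {"ui/Dialog.java"}, {"net/Socket.java"}]
report = ["connect", "timeout", "error"]
print(localize(report, files, fix_history))  # Socket.java ranks first
```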

20.
Context: Bug fixing is an integral part of software development and maintenance. A large number of bugs often indicates poor software quality, since buggy behavior not only causes failures that may be costly but also has a detrimental effect on the user's overall experience with the software product. The impact of long lived bugs can be even more critical, since experiencing the same bug version after version is particularly frustrating for users. While there are many studies that investigate factors affecting bug fixing time for entire bug repositories, to the best of our knowledge, none investigates the extent of and reasons for long lived bugs. Objective: In this paper, we investigate the triaging and fixing processes of long lived bugs so that we can identify the reasons for delay and improve the overall bug fixing process. Methodology: We mine the bug repositories of popular open source projects and analyze long lived bugs from five different perspectives: their proportion, severity, assignment, reasons, and the nature of their fixes. Results: Our study of seven open-source projects shows that there is a considerable number of long lived bugs in each system and that over 90% of them adversely affect the user's experience. The reasons for these long lived bugs are diverse, including long assignment times and failure to recognize their importance in advance. However, many bug fixes were delayed without any specific reason. Furthermore, 40% of long lived bugs need only small fixes. Conclusion: Our overall results suggest that a significant number of long lived bugs could be minimized through careful triaging and prioritization if developers could predict their severity, change effort, and change impact in advance. We believe our results will help both developers and researchers better understand the factors behind delays, improve the overall bug fixing process, and investigate analytical approaches for prioritizing bugs based on bug severity as well as expected bug fixing effort.
