Similar Literature
20 similar articles found.
1.
陈理国, 刘超. 《软件学报》 (Journal of Software), 2014, 25(6): 1169-1179
In software systems, bug localization is a key step in bug fixing: if a bug can be automatically localized to a small scope, the difficulty of fixing it drops dramatically. This paper proposes a bug localization method based on Gaussian processes (GPBL): for each bug, it recommends to developers the source files in which the bug is likely to reside, helping them fix the bug quickly. To validate the method, data were collected from the open-source projects Eclipse and ArgoUML; experimental results show that Gaussian-process bug localization achieves an average recall of 87.16% and an average precision of 78.90%. A comparison with an LDA-based bug localization method shows that the Gaussian process locates bugs more accurately.
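To make the idea concrete, here is a minimal sketch of Gaussian-process-based file ranking. The abstract does not specify the paper's feature set, so the two features per (report, file) pair below (tf-idf similarity and file size) are illustrative stand-ins, not GPBL's actual features.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

def features(report, file_text, vec):
    # two toy features per (report, file) pair: lexical similarity and size
    r = vec.transform([report])
    f = vec.transform([file_text])
    sim = (r @ f.T).toarray()[0, 0]              # tf-idf dot product
    return [sim, len(file_text.split()) / 100.0]

# historical data: label 1 when the fix for the report touched the file
reports = ["crash when saving file", "button color wrong on dialog"]
files = {"io/save.py": "def save write file disk flush",
         "ui/dialog.py": "class dialog button color render"}
vec = TfidfVectorizer().fit(list(files.values()) + reports)
X = [features(r, t, vec) for r in reports for t in files.values()]
y = [1, 0, 0, 1]

gp = GaussianProcessClassifier(kernel=RBF(length_scale=1.0)).fit(X, y)

new_report = "app crashes when saving a file"
ranking = sorted(files, key=lambda n: -gp.predict_proba(
    [features(new_report, files[n], vec)])[0, 1])
print(ranking)   # files ranked by predicted probability of containing the bug
```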

2.
SMT solvers are an important piece of infrastructure software; their defects can cause failures in the software that depends on them, and can even lead to safety incidents. Fixing SMT solver bugs, however, is very time-consuming, because developers must invest substantial time and effort to understand a bug and find its root cause. Although there is much research on software fault localization, no systematic work has studied how to automatically localize SMT solver bugs. This paper therefore proposes SMTLOC, a fault localization method for SMT solvers based on multi-source spectra. First, for a given solver bug, SMTLOC uses an enumeration algorithm to mutate the bug-triggering formula, generating a set of "witness" formulas that do not trigger the bug but have execution paths similar to that of the triggering formula. Then, based on the execution paths of the witness formulas and the solver's source-code information, SMTLOC computes a file suspiciousness score that fuses the coverage spectrum with the history spectrum, thereby locating the files likely to contain the defect. To validate SMTLOC, 60 SMT solver bugs were collected. Experimental results show that SMTLOC clearly outperforms traditional spectrum-based fault localization: it localizes 46.67% of the bugs within the TOP-5 files, an improvement of 133.33%.
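The suspiciousness computation can be sketched as follows, under the assumption that the coverage and history spectra are fused linearly; the Ochiai formula and the weight alpha are stand-ins, since the abstract does not give SMTLOC's exact formulas.

```python
import math

def ochiai(failing_cov, witness_covs, f):
    ef = 1 if f in failing_cov else 0              # covered by the failing run
    nf = 1 - ef                                    # (single failing run here)
    ep = sum(1 for c in witness_covs if f in c)    # covered by witness runs
    denom = math.sqrt((ef + nf) * (ef + ep))
    return ef / denom if denom else 0.0

def suspiciousness(files, failing_cov, witness_covs, fix_counts, alpha=0.7):
    # linear fusion of coverage spectrum and (normalized) history spectrum
    top = max(fix_counts.values(), default=1) or 1
    return {f: alpha * ochiai(failing_cov, witness_covs, f)
               + (1 - alpha) * fix_counts.get(f, 0) / top
            for f in files}

failing = {"rewriter.cpp", "solver_core.cpp", "parser.cpp"}
witnesses = [{"parser.cpp", "solver_core.cpp"}, {"parser.cpp"}]
history = {"rewriter.cpp": 12, "parser.cpp": 3, "api.cpp": 1}

scores = suspiciousness(failing | {"api.cpp"}, failing, witnesses, history)
for f, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{f}: {s:.3f}")   # rewriter.cpp ranks first: covered only by the
                             # failing formula and frequently fixed before
```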

3.
We describe a method to use the source code change history of a software project to drive and help to refine the search for bugs. Based on the data retrieved from the source code repository, we implement a static source code checker that searches for a commonly fixed bug and uses information automatically mined from the source code repository to refine its results. By applying our tool, we have identified a total of 178 warnings that are likely bugs in the Apache Web server source code and a total of 546 warnings that are likely bugs in Wine, an open-source implementation of the Windows API. We show that our technique is more effective than the same static analysis that does not use historical data from the source code repository.

4.
Software crashes are severe manifestations of software bugs. Debugging crashing bugs is tedious and time-consuming. Understanding the software changes that induce a crashing bug can provide useful contextual information for bug fixing and is highly demanded by developers. Locating the bug-inducing changes is also useful for automatic program repair, since it narrows down the root causes and reduces the search space of bug fix locations. However, there are currently no systematic studies on locating the software changes in a source code repository that induce a crashing bug reflected by a bucket of crash reports. To tackle this problem, we first conducted an empirical study characterizing the bug-inducing changes for crashing bugs (denoted as crash-inducing changes). We also propose ChangeLocator, a method to automatically locate crash-inducing changes for a given bucket of crash reports. We base our approach on a learning model that uses features originating from our empirical study and train the model using data from historically fixed crashes. We evaluated ChangeLocator on six release versions of the Netbeans project. The results show that it can locate the crash-inducing changes for 44.7%, 68.5%, and 74.5% of the bugs by examining only the top 1, 5, and 10 changes in the recommended list, respectively. It significantly outperforms the existing state-of-the-art approach.

5.
Context: Bug fixing is an integral part of software development and maintenance. A large number of bugs often indicates poor software quality, since buggy behavior not only causes failures that may be costly but also has a detrimental effect on the user's overall experience with the software product. The impact of long-lived bugs can be even more critical, since experiencing the same bug version after version can be particularly frustrating for users. While there are many studies that investigate factors affecting bug fixing time for entire bug repositories, to the best of our knowledge, none of these studies investigates the extent and reasons of long-lived bugs.
Objective: In this paper, we investigate the triaging and fixing processes of long-lived bugs so that we can identify the reasons for delay and improve the overall bug fixing process.
Methodology: We mine the bug repositories of popular open source projects, and analyze long-lived bugs from five different perspectives: their proportion, severity, assignment, reasons, as well as the nature of fixes.
Results: Our study on seven open-source projects shows that there are a considerable number of long-lived bugs in each system and over 90% of them adversely affect the user's experience. The reasons for these long-lived bugs are diverse, including long assignment time, not understanding their importance in advance, etc. However, many bug fixes were delayed without any specific reason. Furthermore, 40% of long-lived bugs need only small fixes.
Conclusion: Our overall results suggest that a significant number of long-lived bugs may be minimized through careful triaging and prioritization if developers could predict their severity, change effort, and change impact in advance. We believe our results will help both developers and researchers better understand the factors behind delays, improve the overall bug fixing process, and investigate analytical approaches for prioritizing bugs based on bug severity as well as expected bug fixing effort.

6.
Bug localization is a key step in bug fixing. As computer software grows ever more complex and networks develop rapidly, quickly and efficiently locating the code related to a bug has become a pressing problem. Building on existing information-retrieval-based bug localization methods and taking bug-fix history into account, this paper proposes a two-phase bug localization method based on fix history. Instead of relying on textual similarity alone, the method starts from the locality of bug fixes and additionally considers fix-history records, change information, and code features, combining information retrieval with defect prediction to improve localization precision. Finally, the feasibility and effectiveness of the method are validated on two open-source projects.
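A minimal sketch of such a two-phase ranking follows: phase 1 scores files by textual similarity to the bug report, and phase 2 blends in a fix-history prior that exploits the locality of bug fixes (files fixed recently and often tend to be buggy again). The exponential decay and the 0.7/0.3 blend are assumptions; the paper's exact features and weights are not given in the abstract.

```python
import math
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def history_prior(fix_days_ago, half_life=90.0):
    # recent fixes contribute more; each past fix decays with its age
    return sum(math.exp(-math.log(2) * d / half_life) for d in fix_days_ago)

def locate(report, files, fix_history, alpha=0.7):
    names, texts = zip(*files.items())
    vec = TfidfVectorizer().fit(list(texts) + [report])
    sim = cosine_similarity(vec.transform([report]),
                            vec.transform(list(texts)))[0]
    priors = [history_prior(fix_history.get(n, [])) for n in names]
    top = max(priors) or 1.0
    scores = {n: alpha * s + (1 - alpha) * p / top
              for n, s, p in zip(names, sim, priors)}
    return sorted(scores.items(), key=lambda kv: -kv[1])

files = {"core/parser.py": "parse tokens syntax tree error recovery",
         "ui/panel.py": "panel layout render button click"}
fix_history = {"core/parser.py": [12, 40, 300], "ui/panel.py": [500]}
print(locate("syntax error parsing nested tokens", files, fix_history))
```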

7.
解铮, 黎铭. 《软件学报》 (Journal of Software), 2017, 28(11): 3072-3079
In the development and maintenance of large software projects, locating defects among a large number of code files is time-consuming and labor-intensive; effective automatic bug localization can greatly reduce development cost. Bug reports usually contain a wealth of information about undiscovered defects, so accurately finding the code files associated with a bug report matters greatly for reducing maintenance cost. Some deep-neural-network-based localization techniques already improve on traditional methods, but most of that work focuses on designing the network architecture and pays little attention to the loss function used during training, even though the loss function strongly affects prediction performance. Against this background, this paper proposes the cost-sensitive margin distribution optimization (CSMDO) loss function and applies a CSMDO layer in a deep convolutional neural network; the approach handles the class imbalance of software defect data well and further improves localization accuracy.
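The general shape of a cost-sensitive, margin-distribution-style loss can be sketched as below. The exact CSMDO formulation is not given in the abstract, so the hinge, mean-margin, and margin-variance terms, and their weights, are illustrative assumptions.

```python
import torch

def csmdo_loss(scores, labels, cost_pos=5.0, lam=0.5):
    # scores: raw model outputs; labels in {-1, +1}; buggy files (+1) are
    # the minority class, so their errors are scaled up by cost_pos
    margins = scores * labels
    cost = torch.ones_like(labels)
    cost[labels > 0] = cost_pos
    hinge = (torch.clamp(1.0 - margins, min=0.0) * cost).mean()
    # reward a large mean margin, penalize margin variance (assumed form)
    return hinge - margins.mean() + lam * margins.var(unbiased=False)

scores = torch.tensor([2.3, -0.4, 1.1, -1.8], requires_grad=True)
labels = torch.tensor([1.0, 1.0, -1.0, -1.0])
loss = csmdo_loss(scores, labels)
loss.backward()           # differentiable, so usable as a network's loss layer
print(float(loss))
```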

8.
Context: Effort-aware models, e.g., effort-aware bug prediction models, aim to help practitioners identify and prioritize buggy software locations according to the effort involved with fixing the bugs. Since the effort of current bugs is not yet known and the effort of past bugs is typically not explicitly recorded, effort-aware bug prediction models are forced to use approximations, such as the number of lines of code (LOC) of the predicted files.
Objective: Although the choice of these approximations is critical for the performance of the prediction models, there is no empirical evidence on whether LOC is actually a good approximation. Therefore, in this paper, we investigate the question: is LOC a good measure of effort for use in effort-aware models?
Method: We perform an empirical study on four open source projects, for which we obtain explicitly-recorded effort data, and compare the use of LOC to various complexity, size, and churn metrics as measures of effort.
Results: We find that using a combination of complexity, size, and churn metrics is a better measure of effort than using LOC alone. Furthermore, we examine the impact of our findings on previous effort-aware bug prediction work and find that using LOC as a measure of effort does not significantly affect the list of files being flagged; however, using LOC under-estimates the amount of effort required compared to our best effort predictor by approximately 66%.
Conclusion: Studies using effort-aware models should not assume that LOC is a good measure of effort. For the case of effort-aware bug prediction, using LOC provides results that are similar to combining complexity, churn, size, and LOC as a proxy for effort when prioritizing the most risky files. However, we find that for the purpose of effort estimation, using LOC may under-estimate the amount of effort required.

9.
Bug fixing has a key role in software quality evaluation. Bug fixing starts with the bug localization step, in which developers use textual bug information to find the location of the source code that contains the bug. Bug localization is a tedious and time-consuming process. Information retrieval requires understanding the program's goal, coding structure, programming logic, and the relevant attributes of the bug. Information retrieval (IR) based bug localization is a retrieval task, where bug reports and source files represent the queries and documents, respectively. In this paper, we propose BugCatcher, a newly developed bug localization method based on a multi-level re-ranking IR technique. We evaluate BugCatcher on three open source projects with approximately 3400 bugs. Our experiments show that the multi-level re-ranking approach to bug localization is promising. The retrieval performance and accuracy of BugCatcher are better than those of current bug localization tools, and BugCatcher has the best Top-N, Mean Average Precision (MAP), and Mean Reciprocal Rank (MRR) values for all datasets.
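For reference, the MAP and MRR metrics reported above can be computed over ranked file lists as follows; this is a straightforward implementation of the standard definitions, with toy data.

```python
def average_precision(ranked, relevant):
    hits, score = 0, 0.0
    for i, f in enumerate(ranked, start=1):
        if f in relevant:
            hits += 1
            score += hits / i          # precision at each relevant hit
    return score / max(len(relevant), 1)

def mean_average_precision(runs):
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)

def mean_reciprocal_rank(runs):
    total = 0.0
    for ranked, relevant in runs:
        for i, f in enumerate(ranked, start=1):
            if f in relevant:          # reciprocal rank of the first hit
                total += 1 / i
                break
    return total / len(runs)

runs = [(["a.java", "b.java", "c.java"], {"b.java"}),
        (["x.java", "y.java"], {"x.java", "y.java"})]
print(mean_average_precision(runs), mean_reciprocal_rank(runs))  # 0.75 0.75
```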

10.
To support program comprehension, software artifacts can be labeled—for example within software visualization tools—with a set of representative words, hereby referred to as labels. Such labels can be obtained using various approaches, including Information Retrieval (IR) methods or other simple heuristics. They provide a bird's-eye view of the source code, allowing developers to look over software components quickly and make more informed decisions on which parts of the source code they need to analyze in detail. However, few empirical studies have been conducted to verify whether the extracted labels make sense to software developers. This paper investigates (i) to what extent various IR techniques and other simple heuristics overlap with (and differ from) labeling performed by humans; (ii) what kinds of source code terms humans use when labeling software artifacts; and (iii) what factors—in particular, what characteristics of the artifacts to be labeled—influence the performance of automatic labeling techniques. We conducted two experiments in which we asked a group of students (38 in total) to label 20 classes from two Java software systems, JHotDraw and eXVantage. Then, we analyzed to what extent the words identified with an automated technique—including Vector Space Models (VSM), Latent Semantic Indexing (LSI), and latent Dirichlet allocation (LDA), as well as customized heuristics extracting words from specific source code elements—overlap with those identified by humans. Results indicate that, in most cases, simpler automatic labeling techniques—based on the use of words extracted from class and method names as well as from class comments—better reflect human-based labeling. Indeed, clustering-based approaches (LSI and LDA) are more worthwhile for source code artifacts having high verbosity, as well as for artifacts requiring more effort to be manually labeled. The obtained results help to define guidelines on how to build effective automatic labeling techniques, and provide some insights on the actual usefulness of automatic labeling techniques during program comprehension tasks.

11.
Information Retrieval (IR) approaches, such as Latent Semantic Indexing (LSI) and the Vector Space Model (VSM), are commonly applied to recover software traceability links. Recently, an approach based on developers' eye gazes was proposed to retrieve traceability links. This paper presents a comparative study of IR and eye-gaze based approaches. In addition, it reports on the possibility of using eye gaze links as an alternative benchmark in comparison to commits. The study asked developers to perform bug-localization tasks on the open source subject system JabRef. The iTrace environment, an eye-tracking-enabled Eclipse plugin, was used to collect eye gaze data. During the data collection phase, an eye tracker was used to gather the source code entities (SCEs) developers looked at while solving these tasks. We present an algorithm that uses the collected gaze dataset to produce candidate traceability links related to the tasks. In the evaluation phase, we compared the results of our algorithm with the results of an IR technique in two different contexts. In the first context, precision and recall metric values are reported for both the IR and eye gaze approaches based on commits. In the second context, another set of developers was asked to rate the candidate links from each of the two techniques in terms of how useful they were in fixing the bugs. The eye gaze approach outperforms the standard LSI and VSM approaches, reporting 55% precision and 67% recall on average for all tasks when compared to how the developers actually fixed the bug. In the second context, the usefulness results show that links generated by our algorithm were considered significantly more useful (for fixing the bug) than those of the IR technique in a majority of tasks. We discuss the implications of this radically different method of deriving traceability links. Techniques for feature location/bug localization are commonly evaluated on benchmarks formed from commits, as is done in the evaluation phase of this study. Although commits are a reasonable source, they only capture entities that were eventually changed to fix a bug or resolve a feature. We investigate another type of benchmark based on eye tracking data, namely links generated from the bug-localization tasks given to the developers in the data collection phase. The source code entities that IR methods recommend as relevant to the studied bugs are evaluated against both benchmarks: commits and links generated from eye gaze. The results of the benchmarking phase show that the use of eye tracking could form an effective (complementary) benchmark and add another interesting perspective to the evaluation of bug-localization techniques.

12.
CP-Miner: finding copy-paste and related bugs in large-scale software code
Recent studies have shown that large software suites contain significant amounts of replicated code. It is assumed that some of this replication is due to copy-and-paste activity and that a significant proportion of bugs in operating systems are due to copy-paste errors. Existing static code analyzers are either not scalable to large software suites or do not perform robustly where replicated code is modified with insertions and deletions. Furthermore, the existing tools do not detect copy-paste related bugs. In this paper, we propose a tool, CP-Miner, that uses data mining techniques to efficiently identify copy-pasted code in large software suites and detect copy-paste bugs. Specifically, it takes less than 20 minutes for CP-Miner to identify 190,000 copy-pasted segments in Linux and 150,000 in FreeBSD. Moreover, CP-Miner has detected many new bugs in popular operating systems, 49 in Linux and 31 in FreeBSD, most of which have since been confirmed by the corresponding developers and have been rectified in the following releases. In addition, we have found some interesting characteristics of copy-paste in operating system code. Specifically, we analyze the distribution of copy-pasted code by size (number of lines of code), granularity (basic blocks and functions), and modification within copy-pasted code. We also analyze copy-paste across different modules and various software versions.
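As a toy illustration of the clone-detection idea, the sketch below hashes normalized token windows and reports windows that collide. CP-Miner itself mines frequent token subsequences, so it also tolerates statement insertions, deletions, and identifier renaming, and additionally flags inconsistencies between clone copies as likely bugs; this near-verbatim matcher does none of that.

```python
import re
from collections import defaultdict

KEYWORDS = {"if", "return", "for", "while", "free"}

def normalize(line):
    # crude normalization: keep keywords, map other identifiers to ID
    return re.sub(r"[A-Za-z_]\w*",
                  lambda m: m.group() if m.group() in KEYWORDS else "ID",
                  line.strip())

def find_clones(lines, window=3):
    buckets = defaultdict(list)
    for i in range(len(lines) - window + 1):
        key = tuple(normalize(l) for l in lines[i:i + window])
        buckets[key].append(i)          # same normalized window -> same bucket
    return [locs for locs in buckets.values() if len(locs) > 1]

code = ["if (a == NULL)", "  return -1;", "free(a);",
        "if (b == NULL)", "  return -1;", "free(b);"]
print(find_clones(code))   # [[0, 3]]: lines 0-2 are a copy of lines 3-5
```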

13.
Model checking and static analysis are traditionally seen as two separate approaches to software analysis and verification. In this work we define a model-checking approach for the static analysis of large C/C++ source code bases to detect potential run-time issues such as program crashes, security vulnerabilities, and memory leaks. Working at the intersection of software model checking and automated static bug detection for real-life systems, we address a number of issues: how to scale to real-life systems of 1,000,000 LoC or more, how to quickly write new checks, and, most importantly, how to distinguish between relevant and irrelevant bugs and fine-tune the analysis accordingly. We define our model-checking-based static analysis approach as implemented in our tool Goanna, illustrate a number of design and implementation decisions made to obtain practical outcomes and relevant results, and present our findings with empirical data obtained from regularly analyzing large industrial and open source code bases such as the Firefox Web browser.

14.
To reduce the human cost of bug localization, researchers have proposed many information-retrieval-based localization models built on bug reports, including models that use traditional features and models that use deep-learning features. In the experiments designed to evaluate these models, most existing studies overlook the "version mismatch" problem between the version a bug report belongs to and the version of the target source code, and/or the "data leakage" problem caused by ignoring the chronological order of bug reports during training and testing. This work reports how existing models perform in more realistic application scenarios and analyzes how version mismatch and data leakage affect the assessment of each model's true performance. Six localization models using traditional features (BugLocator, BRTracer, BLUiR, AmaLgam, BLIA, Locus) and one using deep-learning features (CodeBERT) are selected as subjects, and a systematic empirical analysis is conducted on eight open-source projects under five experimental settings. First, applying CodeBERT directly to bug localization is not ideal; its accuracy depends on the number of versions and the source-code size of the target project. Second, under the version-matched setting, the models using traditional features improve mean average precision (MAP) and mean reciprocal rank (MRR) by up to 47.2% and 46.0% over the version-mismatched setting, and the CodeBERT model's performance also...
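Both pitfalls are easy to avoid in an evaluation harness. The sketch below splits reports chronologically, so no future report leaks into training, and retrieves against the code snapshot of the version each report was filed against; the field names and the {version: snapshot} map are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class BugReport:
    bug_id: int
    created: int       # e.g., a Unix timestamp
    version: str       # the version the bug was reported against

def chronological_split(reports, train_frac=0.8):
    ordered = sorted(reports, key=lambda r: r.created)
    cut = int(len(ordered) * train_frac)
    return ordered[:cut], ordered[cut:]      # no future report in training

def snapshot_for(report, snapshots):
    return snapshots[report.version]         # version-matched source code

reports = [BugReport(1, 100, "4.2"), BugReport(3, 200, "4.2"),
           BugReport(2, 300, "4.3"), BugReport(4, 400, "4.4")]
train, test = chronological_split(reports, train_frac=0.5)
print([r.bug_id for r in train], [r.bug_id for r in test])   # [1, 3] [2, 4]

snapshots = {"4.2": ["a.py"], "4.3": ["a.py", "b.py"], "4.4": ["b.py"]}
print(snapshot_for(test[0], snapshots))      # code as of version 4.3
```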

15.
Context: Bug report assignment, namely, assigning new bug reports to developers for timely and effective bug resolution, is crucial for software quality assurance. However, with the increasing size of software systems, it is difficult for bug managers to assign bugs to appropriate developers.
Objective: This paper proposes an approach, called KSAP (K-nearest-neighbor search and heterogeneous proximity), to improve automatic bug report assignment by using historical bug reports and a heterogeneous network of the bug repository.
Method: When a new bug report is submitted to the bug repository, KSAP assigns developers to the bug report using a two-phase procedure. The first phase searches for historically-resolved bug reports similar to the new report using the K-nearest-neighbor (KNN) method. The second phase ranks the developers who contributed to those similar bug reports by heterogeneous proximity.
Results: We collected the bug repositories of the Mozilla, Eclipse, Apache Ant, and Apache Tomcat6 projects to investigate the performance of the proposed KSAP approach. Experimental results demonstrate that KSAP improves the recall of bug report assignment by 7.5-32.25% in comparison with state-of-the-art techniques. When there is only a small number of developer collaborations on common bug reports, KSAP shows its excellence over the other state-of-the-art techniques. When we tune the parameters for the number of historically-resolved similar bug reports (K) and the number of developers (Q) to recommend, KSAP keeps its superiority steadily.
Conclusion: This is the first paper to demonstrate how to automatically build a heterogeneous network of a bug repository and extract meta-paths of developer collaborations from that network for bug report assignment.
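KSAP's two-phase flow can be sketched as follows. The frequency-weighted-by-similarity developer score below is a simple proxy for the paper's heterogeneous proximity over the bug-repository network, whose exact form the abstract does not spell out.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def assign(new_report, history, k=3, q=2):
    texts = [h["text"] for h in history]
    vec = TfidfVectorizer().fit(texts + [new_report])
    sims = cosine_similarity(vec.transform([new_report]),
                             vec.transform(texts))[0]
    top = sims.argsort()[::-1][:k]          # phase 1: KNN search over reports
    scores = {}
    for i in top:                           # phase 2: rank developers who
        for dev in history[i]["developers"]:  # contributed to similar reports
            scores[dev] = scores.get(dev, 0.0) + sims[i]
    return sorted(scores, key=scores.get, reverse=True)[:q]

history = [
    {"text": "null pointer in layout engine", "developers": ["alice", "bob"]},
    {"text": "crash rendering svg image", "developers": ["bob"]},
    {"text": "font cache memory leak", "developers": ["carol"]},
]
print(assign("segfault while rendering svg", history))   # bob ranks first
```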

16.
Debugging is crucial for producing reliable software. One of the effective bug localization techniques is spectrum-based fault localization (SBFL). It helps to locate a buggy statement by applying an evaluation metric to program spectra and ranking program components on the basis of the score it computes. SBFL is an example of dynamic analysis – an analysis of a computer program performed by executing it with a sufficient number of test cases. Static analysis, on the other hand, is performed in a non-runtime environment. We introduce a weighting technique that combines these two kinds of program analysis. Static analysis is performed to categorize program statements into different classes and give them weights based on their likelihood of being a buggy statement. Statements are finally ranked on the basis of the weights computed by statement categorization (static analysis) and the scores computed by SBFL metrics (dynamic analysis). We evaluate the performance of our technique on the Siemens test suite and on Flex (with bugs seeded by expert developers), Sed (with a mixture of real and seeded bugs), and Space (with real bugs). In our evaluation, the proposed weighting technique improves the performance of a wide variety of fault localization metrics by up to 20% on single-bug datasets and up to 42% on multi-bug datasets.
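Here is a minimal sketch of the weighting idea, using Tarantula as the SBFL metric; the statement categories and per-category weights are invented for illustration, since the abstract does not list the paper's classes.

```python
def tarantula(ef, nf, ep, np_):
    # ef/nf: failing tests covering / not covering the statement
    # ep/np_: passing tests covering / not covering the statement
    fail = ef / (ef + nf) if ef + nf else 0.0
    pas = ep / (ep + np_) if ep + np_ else 0.0
    return fail / (fail + pas) if fail + pas else 0.0

# static-analysis class weights (illustrative assumption)
STATIC_WEIGHTS = {"assignment": 1.0, "call": 0.9, "branch": 0.8, "decl": 0.3}

def rank(statements):
    # statements: (id, static category, ef, nf, ep, np)
    scored = [(sid, STATIC_WEIGHTS[cat] * tarantula(ef, nf, ep, np_))
              for sid, cat, ef, nf, ep, np_ in statements]
    return sorted(scored, key=lambda kv: -kv[1])

stmts = [("s1", "assignment", 3, 0, 1, 6), ("s2", "decl", 3, 0, 5, 2),
         ("s3", "branch", 1, 2, 4, 3)]
for sid, s in rank(stmts):
    print(sid, round(s, 3))   # s1 tops: covered by all failing runs and
                              # belongs to a failure-prone static class
```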

17.
席圣渠, 姚远, 徐锋, 吕建. 《软件学报》 (Journal of Software), 2018, 29(8): 2322-2335
As open-source projects grow ever larger, manually assigning an appropriate developer to each bug report (bug triage) becomes increasingly difficult, and an inappropriate assignment often severely hurts bug-fixing efficiency; a triage-assistance technique that helps project managers assign bugs is therefore urgently needed. Most current work characterizes developers by analyzing bug-report text and related metadata, ignoring developer activity, and therefore performs poorly when predicting assignments among developers with similar profiles. This paper proposes DeepTriage, a deep learning model based on recurrent neural networks: it extracts text features of a bug report with a bidirectional recurrent network plus pooling, extracts a developer's activity features at a given moment with a unidirectional recurrent network, fuses the two, and trains with supervision on already-fixed bug reports. Experiments on four open-source project datasets, including Eclipse, show that DeepTriage significantly improves triage-prediction accuracy over comparable work.
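The described architecture can be sketched in a few lines. The layer sizes and the fusion-by-concatenation step are assumptions; the abstract only fixes the overall shape (bidirectional RNN plus pooling for text, unidirectional RNN for activity).

```python
import torch
import torch.nn as nn

class DeepTriageSketch(nn.Module):
    def __init__(self, vocab=5000, emb=64, hid=64, n_devs=50, acts=10):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.text_rnn = nn.GRU(emb, hid, batch_first=True, bidirectional=True)
        self.act_rnn = nn.GRU(acts, hid, batch_first=True)
        self.out = nn.Linear(2 * hid + hid, n_devs)

    def forward(self, tokens, activity):
        t, _ = self.text_rnn(self.embed(tokens))   # (B, L, 2*hid)
        t = t.max(dim=1).values                    # max pooling over time
        _, h = self.act_rnn(activity)              # h: (1, B, hid)
        fused = torch.cat([t, h[-1]], dim=-1)      # fuse text + activity
        return self.out(fused)                     # logits over developers

model = DeepTriageSketch()
tokens = torch.randint(0, 5000, (2, 30))           # 2 reports, 30 tokens each
activity = torch.randn(2, 7, 10)                   # 7 steps x 10 activity feats
print(model(tokens, activity).shape)               # torch.Size([2, 50])
```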

18.
王燕, 吴化尧, 聂长海, 徐家喜, 尹震, 钮鑫涛. 《软件学报》 (Journal of Software), 2022, 33(11): 3983-4007
Defect tracking is an important part of software project management and a necessary means of keeping large-scale open-source development running smoothly while continuously improving software quality. Most open-source software manages defects through an open bug tracking system, which lets users submit both system failures (defect-type reports) and improvement suggestions (enhancement-type reports); however, the role this user feedback plays has not been fully studied. Addressing this question, this paper presents an empirical study of Firefox's bug tracking system based on 19,474 Firefox Desktop and 3,057 Firefox for Android bug reports submitted in 2018 and 2019. It compares the bugs submitted by ordinary users and by core developers in terms of quantity, severity, component distribution, fix rate, fix speed, and fixers, and investigates how the quality of a bug report relates to its outcome and fix time. The main findings are: (1) ordinary users are numerous in the bug tracker but participate only shallowly; 86% of them submitted just one bug, and no more than 3% of those bugs were of high severity; (2) ordinary users' bugs cluster in UI components related to user interaction (e.g., the address bar, audio/video), yet 43% of them lack enough descriptive information to be mapped accurately to a specific component; (3) because the duplicate-detection and bug-filing systems are too simplistic, a large share of bugs submitted by ordinary users end up resolved as "useless", with a fix rate below 10%; (4) in the fixing workflow, because ordinary users struggle to describe bugs accurately and completely, their reports receive less attention and go through a more complex handling process than core developers' reports, taking on average at least 8 more days to fix. These findings expose shortcomings of current bug tracking systems in user-participation incentives, automatic duplicate detection, and intelligent assistance for writing bug reports, and can help the developers and managers of bug tracking systems improve them and raise ordinary users' contributions to open-source software.

19.
Context: Blocking bugs are bugs that prevent other bugs from being fixed. Previous studies show that blocking bugs take approximately two to three times longer to be fixed than non-blocking bugs.
Objective: Thus, automatically predicting blocking bugs early on, so that developers are aware of them, can help reduce their impact or avoid them altogether. However, a major challenge when predicting blocking bugs is that only a small proportion of bugs are blocking bugs, i.e., there is an unequal distribution between blocking and non-blocking bugs. For example, in Eclipse and OpenOffice, only 2.8% and 3.0% of bugs are blocking bugs, respectively. We refer to this as the class imbalance phenomenon.
Method: In this paper, we propose ELBlocker to identify blocking bugs given training data. ELBlocker first randomly divides the training data into multiple disjoint sets and builds a classifier for each disjoint set. Next, it combines these multiple classifiers and automatically determines an appropriate imbalance decision boundary to differentiate blocking bugs from non-blocking bugs. With the imbalance decision boundary, a bug report will be classified as a blocking bug when its likelihood score is larger than the decision boundary, even if its likelihood score is low.
Results: To examine the benefits of ELBlocker, we perform experiments on 6 large open source projects – namely Freedesktop, Chromium, Mozilla, Netbeans, OpenOffice, and Eclipse – containing a total of 402,962 bugs. We find that ELBlocker achieves F1 and EffectivenessRatio@20% scores of up to 0.482 and 0.831, respectively. On average across the 6 projects, ELBlocker improves the F1 and EffectivenessRatio@20% scores over the state-of-the-art method proposed by Garcia and Shihab by 14.69% and 8.99%, respectively. Statistical tests show that the improvements are significant and the effect sizes are large.
Conclusion: ELBlocker can help deal with the class imbalance phenomenon and improve the prediction of blocking bugs. ELBlocker achieves a substantial and statistically significant improvement over the state-of-the-art methods, i.e., Garcia and Shihab's method, SMOTE, OSS, and Bagging.
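The recipe can be sketched as follows, with logistic regression standing in for the unnamed base learner and F1 on the training data as the criterion for tuning the imbalance decision boundary (both assumptions); stratified folds keep positives in every disjoint subset.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import StratifiedKFold

def train_elblocker(X, y, n_parts=5, seed=0):
    skf = StratifiedKFold(n_splits=n_parts, shuffle=True, random_state=seed)
    parts = [test_idx for _, test_idx in skf.split(X, y)]   # disjoint subsets
    clfs = [LogisticRegression(max_iter=1000).fit(X[p], y[p]) for p in parts]
    scores = np.mean([c.predict_proba(X)[:, 1] for c in clfs], axis=0)
    # tune the decision boundary instead of defaulting to 0.5
    boundary = max(np.linspace(0.01, 0.99, 99),
                   key=lambda t: f1_score(y, (scores >= t).astype(int)))
    return clfs, boundary

def predict(clfs, boundary, X):
    scores = np.mean([c.predict_proba(X)[:, 1] for c in clfs], axis=0)
    return scores >= boundary

rng = np.random.RandomState(1)
y = np.zeros(600, dtype=int)
y[:20] = 1                                # ~3% positives, like blocking bugs
X = rng.randn(600, 4)
X[y == 1] += 1.5                          # make positives weakly separable
clfs, b = train_elblocker(X, y)
print("boundary:", round(b, 2), "| flagged:", int(predict(clfs, b, X).sum()))
```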

20.
Context: Security vulnerabilities discovered later in the development cycle are more expensive to fix than those discovered early. Therefore, software developers should strive to discover vulnerabilities as early as possible. Unfortunately, the large size of code bases and lack of developer expertise can make discovering software vulnerabilities difficult. A number of vulnerability discovery techniques are available, each with its own strengths.
Objective: The objective of this research is to aid in the selection of vulnerability discovery techniques by comparing the vulnerabilities detected by each and comparing their efficiencies.
Method: We conducted three case studies using three electronic health record systems to compare four vulnerability discovery techniques: exploratory manual penetration testing, systematic manual penetration testing, automated penetration testing, and automated static analysis.
Results: In our case study, we found empirical evidence that no single technique discovered every type of vulnerability. We discovered that the specific set of vulnerabilities identified by one tool was largely orthogonal to that of other tools. Systematic manual penetration testing found the most design flaws, while automated static analysis found the most implementation bugs. The most efficient discovery technique in terms of vulnerabilities discovered per hour was automated penetration testing.
Conclusion: The results show that employing a single technique for vulnerability discovery is insufficient for finding all types of vulnerabilities. Each technique identified only a subset of the vulnerabilities, which, for the most part, were independent of each other. Our results suggest that in order to discover the greatest variety of vulnerability types, at least systematic manual penetration testing and automated static analysis should be performed.
