Similar Documents
20 similar documents found (search time: 22 ms)
1.
陈理国, 刘超. 软件学报 (Journal of Software), 2014, 25(6): 1169-1179
In software systems, bug localization is a key step in bug fixing; if a defect can be automatically localized to a small scope, the difficulty of fixing it is greatly reduced. This paper proposes a Gaussian-process-based bug localization method (GPBL): for each defect, it recommends to developers the source files in which the defect is likely to reside, helping developers fix defects quickly. To validate the method, data were collected from the open-source projects Eclipse and ArgoUML. Experimental results show that Gaussian-process bug localization achieves average recall and precision of 87.16% and 78.90%, respectively, and a comparison with an LDA-based bug localization method shows that the Gaussian process localizes defects more accurately.
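The abstract gives no implementation details, but the core idea of training a Gaussian process to score (bug report, source file) pairs and recommending the top-scored files can be sketched as below. The three similarity features and all data are hypothetical stand-ins, not the paper's actual feature set.

```python
# A minimal sketch of Gaussian-process bug localization in the spirit of GPBL.
# Feature construction is assumed for illustration only.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Hypothetical training data: one row of similarity features per
# (bug report, source file) pair, labeled 1 if the file contained the bug.
X_train = rng.random((200, 3))           # e.g. [text sim, path sim, history sim]
y_train = (X_train[:, 0] + X_train[:, 2] > 1.0).astype(int)

gp = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0))
gp.fit(X_train, y_train)

# For a new bug report, score every candidate file and recommend the top ones.
X_candidates = rng.random((50, 3))       # features for 50 candidate files
scores = gp.predict_proba(X_candidates)[:, 1]
top_files = np.argsort(scores)[::-1][:10]
print("Top-10 candidate file indices:", top_files)
```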

2.
Many recent software engineering papers have examined duplicate issue reports. Thus far, duplicate reports have been considered a hindrance to developers and a drain on their resources. As a result, prior research in this area focuses on proposing automated approaches to accurately identify duplicate reports. However, no prior studies attempt to quantify the actual effort that is spent on identifying duplicate issue reports. In this paper, we empirically examine the effort needed to manually identify duplicate reports in four open source projects, i.e., Firefox, SeaMonkey, Bugzilla and Eclipse-Platform. Our results show that: (i) more than 50% of the duplicate reports are identified within half a day, and most are identified without any discussion and with the involvement of very few people; (ii) a classification model built using a set of factors extracted from duplicate issue reports classifies duplicates according to the effort needed to identify them with a precision of 0.60 to 0.77, a recall of 0.23 to 0.96, and an ROC area of 0.68 to 0.80; and (iii) factors that capture the developer awareness of the duplicate issue's peers (i.e., other duplicates of that issue) and the textual similarity of a new report to prior reports are the most influential factors in our models. Our findings highlight the need for effort-aware evaluation of approaches that identify duplicate issue reports, since identifying a considerable share of duplicate reports (over 50%) appears to be a relatively trivial task for developers. To better assist developers, research on identifying duplicate issue reports should put greater emphasis on assisting developers with the effort-consuming duplicate issues.
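As a rough illustration of point (ii), a minimal effort-classification sketch follows. The factor names, data, and learner (logistic regression) are placeholder assumptions; the paper's exact pipeline is not reproduced here.

```python
# A minimal sketch: predict whether a duplicate report will be identified
# quickly from a handful of factors, then report the paper's three metrics.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Hypothetical factors: [peer awareness, textual similarity, # of comments]
X = rng.random((500, 3))
y = (X[:, 1] > 0.5).astype(int)  # 1 = identified quickly (toy labeling rule)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
clf = LogisticRegression().fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("precision:", precision_score(y_te, pred))
print("recall:   ", recall_score(y_te, pred))
print("ROC AUC:  ", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```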

3.
Research on a method for automatically analyzing the correlation between software defect reports (cited 2 times: 1 self-citation, 1 by others)
Existing research on defect-report correlation analysis mainly computes similarity over textual information, yielding unsatisfactory recall and precision. This paper proposes a method that combines structured-information similarity with textual-information similarity: it extracts both the textual information in a defect report (the summary and the detailed description) and the structured information (patches, exception stack traces, and code snippets), and measures the correlation between defect reports from two angles, the external manifestation and the internal characteristics of the defects. Experiments on 1,000 defect reports from the Eclipse project show that adding structured-information similarity effectively raises both the precision and the recall of correlation analysis to around 90%.
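A minimal sketch of the combination idea follows: compute one TF-IDF cosine similarity per field and fuse them. The field names and the equal weighting are illustrative assumptions, not the paper's tuned values.

```python
# A minimal sketch of combining textual and structured similarity between two
# defect reports. Fields and weights are toy placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def field_similarity(a: str, b: str) -> float:
    """TF-IDF cosine similarity between two text fields."""
    if not a.strip() or not b.strip():
        return 0.0
    tfidf = TfidfVectorizer().fit_transform([a, b])
    return float(cosine_similarity(tfidf[0], tfidf[1])[0, 0])

report_a = {"text": "crash when opening editor",
            "stack": "NullPointerException at Editor.open"}
report_b = {"text": "editor crashes on open",
            "stack": "NullPointerException at Editor.open"}

text_sim = field_similarity(report_a["text"], report_b["text"])
struct_sim = field_similarity(report_a["stack"], report_b["stack"])
combined = 0.5 * text_sim + 0.5 * struct_sim   # weighted fusion
print(f"text={text_sim:.2f} structured={struct_sim:.2f} combined={combined:.2f}")
```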

4.
Severity levels, e.g., critical and minor, of bugs are often used to prioritize development efforts. Prior research efforts have proposed approaches to automatically assign the severity label to a bug report. All prior efforts verify the accuracy of their approaches using human-assigned bug report data stored in software repositories. However, all prior efforts assume that such human-assigned data is reliable; hence a perfect automated approach should be able to assign the same severity label as in the repository, achieving 100% accuracy. Looking at duplicate bug reports (i.e., reports referring to the same problem) from three open-source software systems (OpenOffice, Mozilla, and Eclipse), we find that around 51% of the duplicate bug reports have inconsistent human-assigned severity labels even though they refer to the same software problem. While our results indicate that duplicate bug reports have unreliable severity labels, we believe they also send warning signals about the reliability of the full bug severity data (i.e., including non-duplicate reports). Future research efforts should explore whether our findings generalize to the full dataset, and should factor in the unreliable nature of the bug severity data. Given this unreliability, classical metrics for assessing the accuracy of models/learners should not be used to assess approaches for automatically assigning severity labels. Hence, we propose a new approach to assess the performance of such models. Our new assessment approach shows that current automated approaches perform well, with 77-86% agreement with human-assigned severity labels.

5.
Automated detection of duplicate defect reports reduces development redundancy and maintenance cost. Recent duplicate-report detection tends to use deep neural networks and to combine structured and unstructured information into hybrid feature representations. To extract features from the unstructured information of defect reports more effectively, this paper proposes D_BBAS (Doc2vec and BERT BiLSTM-attention similarity), which trains feature-extraction models on a large-scale defect report corpus to generate distributed representations of defect summaries and defect descriptions that reflect deep semantic information. Similarities between report pairs computed over these two distributed representation sets yield two new similarity features, which are then combined with the traditional features derived from structured information for duplicate-report detection. The effectiveness of D_BBAS is validated on the defect report repositories of the well-known open-source projects Eclipse, NetBeans and Open Office, covering more than 500,000 defect reports. Experimental results show that, compared with representative methods, D_BBAS improves the F1 score by 1.7% on average, demonstrating its effectiveness.
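The Doc2vec branch of this feature construction can be sketched as below: one model per field, each yielding a pairwise similarity, so every report pair gets two new features. The BERT-BiLSTM-attention component and the real corpora are omitted; everything here is a toy placeholder.

```python
# A minimal sketch of D_BBAS-style feature construction with gensim Doc2vec.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

def similarity_feature(texts, i, j):
    """Train a toy Doc2vec model on `texts` and return cosine(doc i, doc j)."""
    corpus = [TaggedDocument(t.split(), [k]) for k, t in enumerate(texts)]
    model = Doc2Vec(corpus, vector_size=16, min_count=1, epochs=100, seed=0)
    return float(model.dv.similarity(i, j))

summaries = ["crash on save", "application crash while saving", "slow startup"]
descriptions = [
    "the app crashes whenever a file is saved",
    "saving any document makes the program crash",
    "startup takes over a minute on an empty profile",
]

sim_summary = similarity_feature(summaries, 0, 1)
sim_description = similarity_feature(descriptions, 0, 1)
print("new features for pair (0, 1):", sim_summary, sim_description)
```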

6.
Context: The component field in a bug report provides important location information required by developers during bug fixes. Research has shown that incorrect component assignment for a bug report often causes problems and delays in bug fixes. A topic model technique, Latent Dirichlet Allocation (LDA), has been developed to create a component recommender for bug reports. Objective: We seek to investigate a better way to use topic modeling in creating a component recommender. Method: This paper presents a component recommender using the proposed Discriminative Probability Latent Semantic Analysis (DPLSA) model and Jensen–Shannon divergence (DPLSA-JS). The proposed DPLSA model provides a novel method to initialize the word distributions for different topics: it uses the bug reports previously assigned to the same component in the model training step, which results in a correlation between the learned topics and the components. Results: We evaluate the proposed approach on five open source projects: Mylyn, Gcc, Platform, Bugzilla and Firefox. The results show that the proposed approach on average outperforms the LDA-KL method by 30.08%, 19.60% and 14.13% for recall@1, recall@3 and recall@5, and outperforms the LDA-SVM method by 31.56%, 17.80% and 8.78% for the same metrics, respectively. Conclusion: We find that using comments in the DPLSA-JS recommender does not always contribute to performance, and that vocabulary size matters in DPLSA-JS: different projects need to set the vocabulary size adaptively through experimentation. In addition, the correspondence between the learned topics and components in DPLSA increases the discriminative power of the topics, which is useful for the recommendation task.
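The Jensen–Shannon step can be sketched as below: compare a new report's topic distribution against each component's distribution and recommend the closest components. The distributions are toy placeholders; the DPLSA training step is not reproduced.

```python
# A minimal sketch of the JS-divergence ranking in DPLSA-JS.
import numpy as np
from scipy.spatial.distance import jensenshannon

component_topics = {                      # hypothetical per-component topic mixes
    "UI":      np.array([0.70, 0.20, 0.10]),
    "Network": np.array([0.10, 0.75, 0.15]),
    "Storage": np.array([0.15, 0.15, 0.70]),
}
new_report = np.array([0.60, 0.25, 0.15])  # topic distribution of a new report

# jensenshannon() returns the JS *distance*; square it for the divergence.
ranking = sorted(component_topics,
                 key=lambda c: jensenshannon(new_report, component_topics[c]) ** 2)
print("Recommended components (best first):", ranking)
```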

7.
王燕, 吴化尧, 聂长海, 徐家喜, 尹震, 钮鑫涛. 软件学报 (Journal of Software), 2022, 33(11): 3983-4007
Defect tracking is an important part of software project management and a necessary means of keeping large-scale open-source development running smoothly while continuously improving software quality. Most open-source projects manage defects through an open bug tracking system, which allows users to submit system failures (defect-type reports) as well as suggestions for improvement (enhancement-type reports); however, the actual role played by this user feedback has not been sufficiently studied. To address this question, we conducted an empirical study of Firefox's bug tracking system, collecting 19,474 Firefox Desktop and 3,057 Firefox for Android defect reports submitted in 2018 and 2019. On this basis, we compared reports submitted by ordinary users and by core developers in terms of quantity, severity, component distribution, fix rate, fix speed and fixers, and investigated how the quality of a report relates to its resolution outcome and fix time. The main findings are: (1) ordinary users in the current bug tracking system are numerous but participate only shallowly: 86% of users submitted a single report, and no more than 3% of their reports are of high severity; (2) reports from ordinary users concentrate on UI components related to user interaction (e.g., the address bar and audio/video), yet 43% of them lack sufficient descriptive information to be accurately mapped to a specific component; (3) regarding resolution outcomes, because the duplicate-checking and report-filing subsystems are too simplistic, a large share of ordinary users' reports are resolved as "useless", and their fix rate is below 10%; (4) in the fixing process, because ordinary users struggle to describe defects accurately and sufficiently, their reports receive less attention, go through a more complex handling process than core developers' reports, and take at least 8 more days on average to fix. These findings reveal shortcomings of current bug tracking systems in user-participation incentives, automatic duplicate detection, and intelligent assistance for report filing, and can serve as a reference for developers and managers of bug tracking systems to improve their systems and increase ordinary users' contribution to open-source software.

8.
The large number of new bug reports received in the bug repositories of software systems makes their management a challenging task. Handling these reports manually is time-consuming, and often results in delaying the resolution of important bugs. To address this issue, a recommender may be developed that automatically prioritizes new bug reports. In this paper, we propose and evaluate a classification-based approach to build such a recommender. We use the Naïve Bayes and Support Vector Machine (SVM) classifiers, and present a comparison to evaluate which classifier performs better in terms of accuracy. Since a bug report contains both categorical and text features, we also evaluate which combination of features best determines the priority of a bug. To evaluate the bug priority recommender, we use precision and recall measures and also propose two new measures, Nearest False Negatives (NFN) and Nearest False Positives (NFP), which provide insight into the results produced by precision and recall. Our findings are that SVM outperforms the Naïve Bayes algorithm on text features, whereas on categorical features Naïve Bayes outperforms SVM. The highest accuracy is achieved with SVM when categorical and text features are combined for training.
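The text-feature comparison can be sketched as below with scikit-learn; the six toy reports, priority labels, and the specific estimators (MultinomialNB, LinearSVC) are illustrative assumptions, not the paper's exact setup.

```python
# A minimal sketch comparing Naive Bayes and SVM on bug report text
# for priority prediction.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

reports = [
    "system crashes on startup losing user data",
    "typo in the about dialog",
    "security hole allows remote code execution",
    "button slightly misaligned in settings",
    "deadlock freezes the whole application",
    "tooltip text is outdated",
]
priority = [1, 3, 1, 3, 1, 3]  # 1 = high, 3 = low (toy labels)

X = TfidfVectorizer().fit_transform(reports)
for clf in (MultinomialNB(), LinearSVC()):
    clf.fit(X, priority)
    print(type(clf).__name__, "training accuracy:", clf.score(X, priority))
```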

9.
Context: Bug report assignment, namely assigning new bug reports to developers for timely and effective bug resolution, is crucial for software quality assurance. However, with the increasing size of software systems, it is difficult for bug managers to assign bugs to appropriate developers. Objective: This paper proposes an approach, called KSAP (K-nearest-neighbor search and heterogeneous proximity), to improve automatic bug report assignment by using historical bug reports and a heterogeneous network of the bug repository. Method: When a new bug report is submitted to the bug repository, KSAP assigns developers to it in a two-phase procedure. The first phase searches for historically resolved bug reports similar to the new report using the K-nearest-neighbor (KNN) method. The second phase ranks the developers who contributed to those similar bug reports by heterogeneous proximity. Results: We collected the bug repositories of the Mozilla, Eclipse, Apache Ant and Apache Tomcat6 projects to investigate the performance of the proposed KSAP approach. Experimental results demonstrate that KSAP improves the recall of bug report assignment by 7.5-32.25% in comparison with state-of-the-art techniques. When there are only a small number of developer collaborations on common bug reports, KSAP retains its advantage over the other techniques, and it remains steadily superior as we tune the number of historically resolved similar bug reports (K) and the number of recommended developers (Q). Conclusion: This is the first paper to demonstrate how to automatically build a heterogeneous network of a bug repository and extract meta-paths of developer collaborations from that network for bug report assignment.
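KSAP's two phases can be sketched as below; the "heterogeneous proximity" of the paper is simplified here to a plain contribution count over the retrieved neighbors, and the history data is a toy placeholder.

```python
# A minimal sketch: KNN retrieval of similar resolved reports, then a
# simplified developer ranking over those neighbors.
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

history = [
    ("crash while parsing malformed xml", ["alice"]),
    ("xml parser fails on empty attribute", ["alice", "bob"]),
    ("ui freezes when resizing window", ["carol"]),
    ("window resize redraw glitch", ["carol", "bob"]),
]
texts = [t for t, _ in history]
vec = TfidfVectorizer()
X = vec.fit_transform(texts)

K = 2
knn = NearestNeighbors(n_neighbors=K, metric="cosine").fit(X)
new_report = vec.transform(["parser crash on broken xml attribute"])
_, idx = knn.kneighbors(new_report)

scores = Counter(dev for i in idx[0] for dev in history[i][1])
print("Recommended developers:", [d for d, _ in scores.most_common()])
```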

10.
The WHO Collaborating Centre for International Drug Monitoring in Uppsala, Sweden, maintains and analyses the world's largest database of reports on suspected adverse drug reaction (ADR) incidents that occur after drugs are on the market. The presence of duplicate case reports is an important data quality problem, and their detection remains a formidable challenge, especially in the WHO drug safety database, where reports are anonymised before submission. In this paper, we propose a duplicate detection method based on the hit-miss model for statistical record linkage described by Copas and Hilton, which handles the limited amount of training data well and is well suited to the available data (categorical and numerical rather than free text). We propose two extensions of the standard hit-miss model: a hit-miss mixture model for errors in numerical record fields, and a new method to handle correlated record fields. We demonstrate their effectiveness both at identifying the most likely duplicate for a given case report (94.7% accuracy) and at discriminating true duplicates from random matches (63% recall with 71% precision). The proposed method allows for more efficient data cleaning in post-marketing drug safety data sets, and perhaps in other knowledge discovery applications as well.
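A flavor of this kind of record-linkage scoring is sketched below. Note the hedge: this is the classic Fellegi-Sunter log-likelihood-ratio score over categorical fields, a simpler relative of the hit-miss model the paper extends, and the m/u probabilities are invented for illustration.

```python
# A minimal Fellegi-Sunter-style match score (NOT the hit-miss mixture model).
import math

# Per-field probabilities: m = P(agree | duplicate), u = P(agree | random pair)
FIELDS = {"drug": (0.95, 0.10), "reaction": (0.90, 0.05), "country": (0.85, 0.30)}

def match_score(rec_a: dict, rec_b: dict) -> float:
    """Sum of per-field log likelihood ratios; higher = more likely duplicate."""
    score = 0.0
    for field, (m, u) in FIELDS.items():
        if rec_a.get(field) == rec_b.get(field):
            score += math.log(m / u)
        else:
            score += math.log((1 - m) / (1 - u))
    return score

a = {"drug": "X", "reaction": "rash", "country": "SE"}
b = {"drug": "X", "reaction": "rash", "country": "NO"}
print("match score:", round(match_score(a, b), 3))
```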

11.
Bug fixing accounts for a large amount of software maintenance resources. Generally, bugs are reported, fixed, verified and closed. However, in some cases bugs have to be re-opened. Re-opened bugs increase maintenance costs, degrade the overall user-perceived quality of the software and lead to unnecessary rework by busy practitioners. In this paper, we study and predict re-opened bugs through a case study on three large open source projects: Eclipse, Apache and OpenOffice. We structure our study along four dimensions: (1) the work habits dimension (e.g., the weekday on which the bug was initially closed), (2) the bug report dimension (e.g., the component in which the bug was found), (3) the bug fix dimension (e.g., the amount of time it took to perform the initial fix) and (4) the team dimension (e.g., the experience of the bug fixer). We build decision trees using the aforementioned factors to predict re-opened bugs, and perform top node analysis to determine which factors are the most important indicators of whether or not a bug will be re-opened. Our study shows that the comment text and the last status of the bug when it is initially closed are the most important factors. Using a combination of these dimensions, we can build explainable prediction models that achieve a precision of 52.1-78.6% and a recall of 70.5-94.1% when predicting whether a bug will be re-opened. We find that the factors that best indicate which bugs might be re-opened vary by project: the comment text is the most important factor for the Eclipse and OpenOffice projects, while the last status is the most important one for Apache. These factors should be closely examined in order to reduce the maintenance cost due to re-opened bugs.
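The modeling step can be sketched as below, with the root split standing in for "top node analysis". The feature names and data are hypothetical placeholders for the study's four dimensions.

```python
# A minimal sketch: a decision tree predicting re-opened bugs, plus a
# simplified top node analysis (which feature does the root split on?).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
feature_names = ["weekday_closed", "comment_length", "fix_time_days",
                 "fixer_experience"]
X = rng.random((300, 4))
y = (X[:, 1] > 0.7).astype(int)   # toy rule: long comments -> re-opened

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

root_feature = tree.tree_.feature[0]
print("Root split feature:", feature_names[root_feature])
```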

12.
Software defects are unavoidable in software development, and submitted defect reports are an important source of information for analyzing and fixing defects. Developers often draw on similar historical defect reports and their fix information to assist in analyzing and fixing a new defect. This paper proposes a knowledge-driven method for recommending similar defect reports. The method first builds a defect knowledge graph using information retrieval and word embedding techniques; it then computes the textual similarity between defect reports with TF-IDF and word embeddings and, taking the defects' attributes into account, derives primary- and secondary-attribute similarities between reports; finally, these similarities are fused into a comprehensive similarity that is used to recommend similar defect reports. Experimental results show that, compared with the baseline method, the proposed method improves performance by 12.7% on average on the Firefox dataset.
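The fusion idea can be sketched as below: a TF-IDF similarity, a word-embedding similarity, and an attribute similarity are combined with fixed weights. The toy embedding table, the inputs, and the fusion weights are all invented for illustration; the knowledge-graph construction is not reproduced.

```python
# A minimal sketch of fusing several similarities into one score.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

EMB = {  # toy word embeddings standing in for a trained model
    "crash": np.array([0.9, 0.1]), "fails": np.array([0.8, 0.2]),
    "editor": np.array([0.1, 0.9]), "viewer": np.array([0.2, 0.8]),
}

def embed(text: str) -> np.ndarray:
    vecs = [EMB[w] for w in text.split() if w in EMB]
    return np.mean(vecs, axis=0) if vecs else np.zeros(2)

a, b = "editor crash", "viewer fails"
tfidf = TfidfVectorizer().fit_transform([a, b])
sim_tfidf = float(cosine_similarity(tfidf[0], tfidf[1])[0, 0])  # 0: no shared words
ea, eb = embed(a), embed(b)
sim_emb = float(ea @ eb / (np.linalg.norm(ea) * np.linalg.norm(eb)))
attr_a, attr_b = "UI", "UI"                      # toy component attribute
sim_attr = 1.0 if attr_a == attr_b else 0.0

fused = 0.4 * sim_tfidf + 0.4 * sim_emb + 0.2 * sim_attr
print(f"tfidf={sim_tfidf:.2f} embedding={sim_emb:.2f} fused={fused:.2f}")
```

Note how the embedding similarity remains high for the semantically related pair even though the TF-IDF similarity is zero; this is the motivation for fusing both.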

13.
Software crashes are severe manifestations of software bugs. Debugging crashing bugs is tedious and time-consuming. Understanding the software changes that induce a crashing bug can provide useful contextual information for bug fixing and is highly demanded by developers. Locating the bug-inducing changes is also useful for automatic program repair, since it narrows down the root causes and reduces the search space of bug fix locations. However, there are currently no systematic studies on locating the changes to a source code repository that induce a crashing bug reflected by a bucket of crash reports. To tackle this problem, we first conducted an empirical study characterizing the bug-inducing changes for crashing bugs (denoted as crash-inducing changes). We also propose ChangeLocator, a method to automatically locate crash-inducing changes for a given bucket of crash reports. We base our approach on a learning model that uses features originating from our empirical study, and train the model using data from historical fixed crashes. We evaluated ChangeLocator on six release versions of the NetBeans project. The results show that it can locate the crash-inducing changes for 44.7%, 68.5%, and 74.5% of the bugs by examining only the top 1, 5 and 10 changes in the recommended list, respectively. It significantly outperforms the existing state-of-the-art approach.
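The top-1/5/10 evaluation style can be sketched as below; the suspicion scores are invented stand-ins for the learned model's output, and the paper's feature set and learner are not reproduced.

```python
# A minimal sketch of top-k evaluation for a change-ranking model.
import numpy as np

rng = np.random.default_rng(3)
n_changes = 100
scores = rng.random(n_changes)          # stand-in for learned model scores
true_change = 42                        # index of the real inducing change
scores[true_change] += 0.5              # toy signal so the model is informative

ranked = np.argsort(scores)[::-1]
for k in (1, 5, 10):
    hit = true_change in ranked[:k]
    print(f"inducing change located in top-{k}: {hit}")
```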

14.
Duplicate defect report detection avoids redundant task assignment and repeated fixing for multiple reports that describe the same defect, reducing software maintenance cost. To further improve detection accuracy, this paper proposes a duplicate defect report detection method that incorporates distributed text representations. First, a Doc2Vec model is trained on a large-scale defect report corpus and used to extract distributed representations of defect reports, encoding reports of different lengths as dense vectors of uniform length. The similarity between reports is then computed by comparing these vectors and used as a new feature; it is fused with other features commonly used in duplicate detection, and a binary classification model is trained with machine learning algorithms. Experimental results on the public Bugzilla duplicate defect report dataset show that, compared with the representative method D_TS, the proposed method improves the F1 score by 2% on average, demonstrating the effectiveness of the new feature.
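The fusion step can be sketched as below: a binary classifier over one Doc2Vec-similarity feature plus traditional pairwise features (the Doc2Vec training itself is as in entry 5's sketch). The feature names, values and labels are invented placeholders.

```python
# A minimal sketch: fuse a distributed-representation similarity with
# traditional features in a binary duplicate classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: [doc2vec_similarity, same_component, title_word_overlap]
X = np.array([
    [0.91, 1, 0.80],   # duplicate pairs
    [0.85, 1, 0.60],
    [0.88, 0, 0.70],
    [0.20, 0, 0.10],   # non-duplicate pairs
    [0.15, 1, 0.05],
    [0.30, 0, 0.20],
])
y = np.array([1, 1, 1, 0, 0, 0])  # 1 = duplicate pair

clf = LogisticRegression().fit(X, y)
pair = np.array([[0.87, 1, 0.65]])  # features of a new report pair
print("P(duplicate):", clf.predict_proba(pair)[0, 1])
```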

15.
State-of-the-art near-duplicate video clip (NDVC) detection for novelty re-ranking uses non-semantic low-level features (color/texture) to detect and eliminate "content-based NDVCs", increasing content-level novelty in the top results. However, humans may also perceive a video as a near duplicate from a semantic perspective. In this paper, we propose a concept-based near-duplicate video clip (CBNDVC) detection technique for novelty re-ranking. We identify "semantic NDVCs" using semantic features (events/concepts) and re-rank the top results to increase both content and semantic novelty. Videos are represented as multivariate time series of confidence values of relevant concepts, and CBNDVC clusters are then discovered by conceptual clustering. The results obtained show higher precision and recall from the user's perspective.
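The representation can be sketched as below: each video becomes a multivariate time series of concept confidences, and near-duplicate candidates are grouped by clustering. Pooling frames by their mean and using agglomerative clustering are simplifying assumptions (the paper uses conceptual clustering); the data is synthetic.

```python
# A minimal sketch of concept-based near-duplicate grouping.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(4)
base = rng.random((20, 5))                   # 20 frames x 5 concept confidences
videos = [
    base,
    base + rng.normal(0, 0.02, base.shape),  # near duplicate of the first video
    rng.random((20, 5)),                     # unrelated video
]

# Pool each time series to a fixed-length vector (mean confidence per concept).
pooled = np.array([v.mean(axis=0) for v in videos])
labels = AgglomerativeClustering(n_clusters=2).fit_predict(pooled)
print("cluster labels:", labels)             # first two videos should share a label
```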

16.
17.
18.
Software defects are unavoidable during software development and maintenance, and defect reports are important documents that describe defects during maintenance; high-quality defect reports can effectively improve the efficiency of defect fixing. However, because many developers, testers and users interact with the bug tracking system and submit defect reports, the same defect may be reported by different people, producing large numbers of duplicate defect reports. Duplicate reports inevitably increase the workload of manually checking for duplicates, waste human and material resources, and reduce the efficiency of defect fixing. Through a systematic literature review, this paper analyzes recent domestic and international research on duplicate defect report detection, summarizing it mainly in terms of research methods, dataset selection and performance evaluation, and identifies the open problems, challenges and suggestions for future research in this area.

19.
Information Retrieval (IR) approaches, such as Latent Semantic Indexing (LSI) and the Vector Space Model (VSM), are commonly applied to recover software traceability links. Recently, an approach based on developers' eye gazes was proposed to retrieve traceability links. This paper presents a comparative study of IR and eye-gaze based approaches. In addition, it reports on the possibility of using eye gaze links as an alternative benchmark to commits. In the study, developers were asked to perform bug-localization tasks on the open source subject system JabRef. The iTrace environment, an eye-tracking-enabled Eclipse plugin, was used to collect eye gaze data. During the data collection phase, an eye tracker was used to gather the source code entities (SCEs) developers looked at while solving these tasks. We present an algorithm that uses the collected gaze dataset to produce candidate traceability links related to the tasks. In the evaluation phase, we compared the results of our algorithm with the results of an IR technique in two different contexts. In the first context, precision and recall values are reported for both the IR and eye gaze approaches against a commit-based benchmark. In the second context, another set of developers was asked to rate the candidate links from each of the two techniques in terms of how useful they were in fixing the bugs. The eye gaze approach outperforms the standard LSI and VSM approaches, with 55% precision and 67% recall on average across all tasks when compared to how the developers actually fixed the bug. In the second context, the usefulness results show that links generated by our algorithm were considered significantly more useful (for fixing the bug) than those of the IR technique in a majority of tasks. We discuss the implications of this radically different method of deriving traceability links. Techniques for feature location/bug localization are commonly evaluated on benchmarks formed from commits, as in the evaluation phase of this study. Although commits are a reasonable source, they only capture entities that were eventually changed to fix a bug or resolve a feature. We therefore investigate another type of benchmark based on eye tracking data, namely links generated from the bug-localization tasks given to the developers in the data collection phase. The source code entities recommended by the IR methods as relevant to the subject bugs are evaluated against both commits and links generated from eye gaze. The results of the benchmarking phase show that eye tracking could form an effective (complementary) benchmark and adds another interesting perspective to the evaluation of bug-localization techniques.
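The IR baseline can be sketched as below: LSI over TF-IDF vectors, ranking source code entities against a bug report by cosine similarity. The corpus of identifiers and the query are toy placeholders, not JabRef data.

```python
# A minimal sketch of an LSI traceability ranking.
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

entities = {                      # hypothetical source code entities
    "BibtexParser.parse": "parse bibtex entry field reader token",
    "EntryEditor.save":   "save entry editor dialog write file",
    "GroupTree.render":   "render group tree node paint",
}
query = ["saving an entry corrupts the bibtex file"]

vec = TfidfVectorizer()
X = vec.fit_transform(list(entities.values()) + query)
lsi = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)

sims = cosine_similarity(lsi[-1:], lsi[:-1])[0]
for name, s in sorted(zip(entities, sims), key=lambda p: -p[1]):
    print(f"{s:.2f}  {name}")
```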

20.
Previous research has provided evidence that a combination of static code metrics and software history metrics can be used to predict, with surprising success, which files in the next release of a large system will have the largest numbers of defects. In contrast, very little research exists to indicate whether information about individual developers can profitably be used to improve predictions. We investigate whether files in a large system that are modified by an individual developer consistently contain either more or fewer faults than the average of all files in the system. The goal of the investigation is to determine whether information about which particular developer modified a file can improve defect predictions. We also extend earlier research evaluating the use of counts of the number of developers who modified a file as predictors of the file's future faultiness. We analyze change reports filed for three large systems, each containing 18 releases, with a combined total of nearly 4 million LOC and over 11,000 files. A buggy file ratio is defined for programmers, measuring the proportion of faulty files in Release R out of all files modified by the programmer in Release R-1. We assess the consistency of the buggy file ratio across releases for individual programmers, both visually and within the context of a fault prediction model. Buggy file ratios for individual programmers often varied widely across the releases they participated in. A prediction model that takes account of the history of faulty files changed by individual developers improves over the standard negative binomial model by less than 0.13% according to one measure, and not at all according to another. In contrast, augmenting a standard model with counts of the cumulative number of developers changing files in prior releases produced up to a 2% improvement in the percentage of faults detected in the top 20% of predicted faulty files. The cumulative number of developers interacting with a file can be a useful variable for defect prediction. However, the study indicates that adding information about which particular developer modified a file is not likely to improve defect predictions.
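The buggy file ratio as defined here is straightforward to compute; a minimal sketch follows, with toy change and fault data standing in for the study's change reports.

```python
# A minimal sketch of the buggy file ratio: for each programmer, the share of
# files they modified in release R-1 that turned out faulty in release R.
modified_prev = {            # files each programmer touched in release R-1
    "alice": {"a.c", "b.c", "c.c"},
    "bob":   {"b.c", "d.c"},
}
faulty_current = {"b.c", "c.c"}   # files with faults in release R

for dev, files in modified_prev.items():
    ratio = len(files & faulty_current) / len(files)
    print(f"{dev}: buggy file ratio = {ratio:.2f}")
```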

