Similar Documents
1.
In practice, some bugs have more impact than others and thus deserve more immediate attention. Due to tight schedules and limited human resources, developers may not have enough time to inspect all bugs, so they often concentrate on the highly impactful ones. In the literature, high-impact bugs refer to bugs that appear at unexpected times or locations and bring unexpected effects (i.e., surprise bugs), or that break pre-existing functionality and damage the user experience (i.e., breakage bugs). Unfortunately, identifying high-impact bugs among the thousands of reports in a bug tracking system is no easy feat, so an automated technique that can identify high-impact bug reports can help developers become aware of them early, rectify them quickly, and minimize the damage they cause. Because only a small proportion of bugs are high-impact, their identification is a difficult task. In this paper, we propose an approach to identify high-impact bug reports by leveraging imbalanced learning strategies. We investigate the effectiveness of various variants, each combining one imbalanced learning strategy with one classification algorithm. In particular, we choose four widely used strategies for dealing with imbalanced data and four state-of-the-art text classification algorithms, and conduct experiments on datasets from four different open source projects. We mainly perform an analytical study on two types of high-impact bugs: surprise bugs and breakage bugs. The results show that different variants perform differently, and the best-performing variants, SMOTE (synthetic minority over-sampling technique) + KNN (K-nearest neighbours) for surprise bug identification and RUS (random under-sampling) + NB (naive Bayes) for breakage bug identification, outperform in F1-score the two state-of-the-art approaches by Thung et al. and by Garcia and Shihab.
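A minimal sketch of the paper's best-performing variant for surprise bugs, SMOTE combined with a KNN classifier, using imbalanced-learn and scikit-learn; the toy reports, labels, and query below are fabricated for illustration, not the paper's data:

```python
from imblearn.over_sampling import SMOTE
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier

reports = [
    "crash on startup after update",          # high-impact (surprise)
    "unexpected data loss when saving file",  # high-impact (surprise)
    "minor typo in settings dialog",
    "button alignment slightly off",
    "documentation link outdated",
    "tooltip text truncated on hover",
    "slow rendering of large tables",
    "wrong icon shown in dark theme",
]
labels = [1, 1, 0, 0, 0, 0, 0, 0]  # 1 = high-impact bug report

vec = TfidfVectorizer()
X = vec.fit_transform(reports)
# over-sample the minority class before training (k_neighbors=1 because
# this toy minority class has only two examples)
X_res, y_res = SMOTE(k_neighbors=1, random_state=42).fit_resample(X, labels)
clf = KNeighborsClassifier(n_neighbors=3).fit(X_res, y_res)
print(clf.predict(vec.transform(["application crashes unexpectedly on save"])))
```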

2.
Bug fixing accounts for a large share of software maintenance resources. Generally, bugs are reported, fixed, verified, and closed. In some cases, however, bugs have to be re-opened. Re-opened bugs increase maintenance costs, degrade the overall user-perceived quality of the software, and lead to unnecessary rework by busy practitioners. In this paper, we study and predict re-opened bugs through a case study on three large open source projects: Eclipse, Apache, and OpenOffice. We structure our study along four dimensions: (1) the work habits dimension (e.g., the weekday on which the bug was initially closed), (2) the bug report dimension (e.g., the component in which the bug was found), (3) the bug fix dimension (e.g., the amount of time it took to perform the initial fix), and (4) the team dimension (e.g., the experience of the bug fixer). We build decision trees using the aforementioned factors to predict re-opened bugs, and perform top node analysis to determine which factors are the most important indicators of whether or not a bug will be re-opened. Our study shows that the comment text and the last status of the bug when it is initially closed are the most important such factors. Using a combination of these dimensions, we can build explainable prediction models that achieve a precision between 52.1% and 78.6% and a recall between 70.5% and 94.1% when predicting whether a bug will be re-opened. We find that the factors that best indicate which bugs might be re-opened vary by project: the comment text is the most important factor for the Eclipse and OpenOffice projects, while the last status is the most important one for Apache. These factors should be closely examined in order to reduce the maintenance cost of re-opened bugs.
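A hedged sketch of this prediction setup: a decision tree over one illustrative factor from each of the four dimensions, with feature importances as a crude stand-in for the paper's top node analysis. All feature values are fabricated:

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

data = pd.DataFrame({
    "weekday_closed":   [1, 5, 3, 1, 2, 4],                # work habits dimension
    "num_comments":     [2, 14, 5, 1, 9, 3],               # bug report dimension
    "fix_time_days":    [1.0, 30.0, 3.5, 0.5, 12.0, 2.0],  # bug fix dimension
    "fixer_experience": [120, 4, 60, 200, 10, 80],         # team dimension
})
reopened = [0, 1, 0, 0, 1, 0]  # 1 = bug was later re-opened

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data, reopened)
# rough analogue of top node analysis: which factors drive the splits?
for name, imp in zip(data.columns, tree.feature_importances_):
    print(f"{name}: {imp:.2f}")
```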

3.
Context: Bug report assignment, namely assigning new bug reports to developers for timely and effective bug resolution, is crucial for software quality assurance. However, as software systems grow, it becomes difficult for bug managers to assign bugs to appropriate developers. Objective: This paper proposes an approach, called KSAP (K-nearest-neighbor search and heterogeneous proximity), to improve automatic bug report assignment by using historical bug reports and a heterogeneous network of the bug repository. Method: When a new bug report is submitted to the bug repository, KSAP assigns developers to it in a two-phase procedure. The first phase searches for historically resolved bug reports similar to the new report using the K-nearest-neighbor (KNN) method. The second phase ranks the developers who contributed to those similar bug reports by heterogeneous proximity. Results: We collected the bug repositories of the Mozilla, Eclipse, Apache Ant, and Apache Tomcat6 projects to investigate the performance of the proposed KSAP approach. Experimental results demonstrate that KSAP improves the recall of bug report assignment by 7.5% to 32.25% in comparison with state-of-the-art techniques. When there are only a small number of developer collaborations on common bug reports, KSAP still outperforms other state-of-the-art techniques, and it maintains its superiority as we tune the number of historically resolved similar bug reports (K) and the number of recommended developers (Q). Conclusion: This is the first paper to demonstrate how to automatically build a heterogeneous network of a bug repository and extract meta-paths of developer collaborations from that network for bug report assignment.
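A minimal sketch of the two-phase idea, with one simplification: the heterogeneous-network proximity of the second phase is replaced here by a plain frequency count of developers over the K nearest historical reports. The history and query are invented:

```python
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

history = [
    ("null pointer in layout engine",     ["alice", "bob"]),
    ("layout engine crashes on resize",   ["alice"]),
    ("font rendering garbled",            ["carol"]),
    ("memory leak in network stack",      ["dave", "bob"]),
]
texts = [text for text, _ in history]
vec = TfidfVectorizer()
X = vec.fit_transform(texts)

# phase 1: K-nearest-neighbor search over resolved reports (K=2 here)
knn = NearestNeighbors(n_neighbors=2, metric="cosine").fit(X)
new_report = "crash in layout engine when resizing window"
_, idx = knn.kneighbors(vec.transform([new_report]))

# phase 2 (simplified): rank developers who contributed to those reports
scores = Counter(dev for i in idx[0] for dev in history[i][1])
print(scores.most_common())  # top Q developers to recommend
```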

4.
Context: Bug fixing is an integral part of software development and maintenance. A large number of bugs often indicates poor software quality, since buggy behavior not only causes failures that may be costly but also has a detrimental effect on the user's overall experience with the software product. The impact of long-lived bugs can be even more critical, since experiencing the same bug version after version is particularly frustrating for users. While many studies investigate the factors affecting bug fixing time across entire bug repositories, to the best of our knowledge none investigates the extent and causes of long-lived bugs. Objective: In this paper, we investigate the triaging and fixing processes of long-lived bugs so that we can identify the reasons for delay and improve the overall bug fixing process. Methodology: We mine the bug repositories of popular open source projects and analyze long-lived bugs from five different perspectives: their proportion, severity, assignment, reasons, and the nature of their fixes. Results: Our study on seven open-source projects shows that each system has a considerable number of long-lived bugs, and over 90% of them adversely affect the user's experience. The reasons for these long-lived bugs are diverse, including long assignment time and failure to recognize their importance in advance. However, many bug fixes were delayed without any specific reason, and 40% of long-lived bugs need only small fixes. Conclusion: Our overall results suggest that a significant number of long-lived bugs could be minimized through careful triaging and prioritization if developers could predict their severity, change effort, and change impact in advance. We believe our results will help both developers and researchers better understand the factors behind delays, improve the overall bug fixing process, and investigate analytical approaches for prioritizing bugs based on severity as well as expected fixing effort.

5.
Software crashes are severe manifestations of software bugs, and debugging crashing bugs is tedious and time-consuming. Understanding the software changes that induce a crashing bug can provide useful contextual information for bug fixing and is highly demanded by developers. Locating bug-inducing changes is also useful for automatic program repair, since it narrows down the root causes and reduces the search space of fix locations. However, there are currently no systematic studies on locating the changes to a source code repository that induce a crashing bug reflected by a bucket of crash reports. To tackle this problem, we first conducted an empirical study characterizing the bug-inducing changes for crashing bugs (denoted as crash-inducing changes). We then propose ChangeLocator, a method to automatically locate crash-inducing changes for a given bucket of crash reports. We base our approach on a learning model that uses features originating from our empirical study and train the model with data from historical fixed crashes. We evaluated ChangeLocator on six release versions of the Netbeans project. The results show that it can locate the crash-inducing changes for 44.7%, 68.5%, and 74.5% of the bugs by examining only the top 1, 5, and 10 changes in the recommended list, respectively, significantly outperforming the existing state-of-the-art approach.
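The abstract does not name the learner or features, so the sketch below is only an assumption-laden illustration of the general recipe: train a classifier on features of historical crash-inducing changes, then rank new candidate changes by predicted probability. The three features (touches a crash-stack frame, change size, author experience) are invented:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# rows: past changes; columns: hypothetical features
X_train = np.array([[1, 120, 3], [0, 10, 50], [1, 45, 8],
                    [0, 300, 2], [1, 15, 20], [0, 60, 40]])
y_train = [1, 0, 1, 0, 1, 0]  # 1 = change induced a past crash

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

# rank the candidate changes for a new crash bucket, best first
candidates = np.array([[1, 80, 5], [0, 20, 30], [1, 10, 60]])
scores = model.predict_proba(candidates)[:, 1]
ranking = np.argsort(-scores)  # examine top-1/5/10 in this order
print(ranking, scores[ranking])
```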

6.
The issue-tracking systems used by software projects contain issues, bugs, or tickets written by a wide variety of bug reporters with different levels of training and knowledge about the system under development. Typically, reporters lack the skills and/or time to search the issue-tracking system for similar issues already reported. As a result, many reports end up referring to the same issue, which makes the bug-report triaging process time-consuming and error-prone. Many researchers have approached the bug-deduplication problem using off-the-shelf information-retrieval (IR) tools. In this work, we extend the state of the art by investigating how contextual information about software-quality attributes, software-architecture terms, and system-development topics can be exploited to improve bug deduplication. We demonstrate the effectiveness of our contextual bug-deduplication method at ranking duplicates on the bug repositories of the Android, Eclipse, Mozilla, and OpenOffice software systems. Based on this experience, we conclude that taking domain-specific context into account can improve IR methods for bug deduplication.
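A rough sketch of IR-based duplicate ranking with a crude stand-in for the paper's contextual features: terms from a hypothetical domain lexicon are appended to each report so they receive extra TF-IDF weight before cosine ranking:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# hypothetical contextual lexicon: quality-attribute and architecture terms
context_terms = {"performance", "crash", "ui", "memory", "rendering"}

def enrich(report: str) -> str:
    # duplicate matched context terms to up-weight them in the vector
    hits = [w for w in report.lower().split() if w in context_terms]
    return report + " " + " ".join(hits)

corpus = [
    "browser crash when opening pdf",
    "application crash on pdf load",
    "dark theme colors wrong in settings",
]
vec = TfidfVectorizer()
X = vec.fit_transform(enrich(r) for r in corpus)
query = vec.transform([enrich("crash while loading a pdf file")])
print(cosine_similarity(query, X))  # rank existing reports as duplicate candidates
```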

7.
Severity levels of bugs, e.g., critical and minor, are often used to prioritize development efforts. Prior research has proposed approaches to automatically assign the severity label to a bug report, and all prior efforts verify their accuracy using the human-assigned bug report data stored in software repositories. However, they all assume that such human-assigned data is reliable; hence a perfect automated approach should be able to assign the same severity label as in the repository, achieving 100% accuracy. Looking at duplicate bug reports (i.e., reports referring to the same problem) from three open-source software systems (OpenOffice, Mozilla, and Eclipse), we find that around 51% of duplicate bug reports have inconsistent human-assigned severity labels even though they refer to the same software problem. While our results indicate that duplicate bug reports have unreliable severity labels, we believe they send warning signals about the reliability of the full bug severity data (i.e., including non-duplicate reports). Future research should explore whether our findings generalize to the full dataset and should factor in the unreliable nature of the bug severity data. Given that unreliability, classical metrics for assessing the accuracy of models and learners should not be used to assess approaches for automatically assigning severity labels. Hence, we propose a new approach to assess the performance of such models; it shows that current automated approaches perform well, achieving 77% to 86% agreement with human-assigned severity labels.

8.
Chen Liguo (陈理国), Liu Chao (刘超). Journal of Software (软件学报), 2014, 25(6): 1169-1179
In software systems, bug localization is a key step in bug fixing; if a bug can be automatically localized to a small region of the code, the difficulty of fixing it drops dramatically. This paper proposes a Gaussian-process-based bug localization method (GPBL): for each bug, it recommends to developers the source files in which the bug may reside, helping them fix the bug quickly. To validate the method, data were collected from the open source projects Eclipse and ArgoUML. Experimental results show that Gaussian-process bug localization achieves an average recall of 87.16% and an average precision of 78.90%. A comparison with an LDA-based bug localization method shows that the Gaussian process localizes bugs more accurately.
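The abstract does not give GPBL's exact features or kernel, so the sketch below only illustrates the general idea with scikit-learn's GaussianProcessClassifier: score (bug report, source file) pairs over two invented features and rank files by predicted probability of being buggy:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier

# rows: (bug report, source file) pairs; columns: hypothetical features,
# e.g., textual similarity to the report and the file's past defect count
X_train = np.array([[0.9, 5], [0.1, 0], [0.7, 3], [0.2, 1], [0.8, 4], [0.05, 0]])
y_train = [1, 0, 1, 0, 1, 0]  # 1 = file actually contained the bug

gp = GaussianProcessClassifier(random_state=0).fit(X_train, y_train)

candidate_files = np.array([[0.85, 2], [0.3, 0], [0.6, 6]])
prob_buggy = gp.predict_proba(candidate_files)[:, 1]
print(np.argsort(-prob_buggy))  # files recommended to the developer, best first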

9.
Toward an understanding of bug fix patterns
Twenty-seven automatically extractable bug fix patterns are defined using the syntax components and context of the source code involved in bug fix changes. Bug fix patterns are extracted from the configuration management repositories of seven open source projects, all written in Java (Eclipse, Columba, JEdit, Scarab, ArgoUML, Lucene, and MegaMek). The defined patterns cover 45.7% to 63.3% of the total bug fix hunk pairs in these projects. The frequency of occurrence of each pattern is computed across all projects; the most common individual patterns are MC-DAP (method call with different actual parameter values) at 14.9-25.5%, IF-CC (change in if conditional) at 5.6-18.6%, and AS-CE (change of assignment expression) at 6.0-14.2%. A correlation analysis of the extracted pattern instances shows that six of the seven projects have very similar bug fix pattern frequencies. Analysis of if-conditional bug fix sub-patterns shows a trend toward increasing conditional complexity in if-conditional fixes. Analysis of five developers in the Eclipse project shows overall consistency with project-level bug fix pattern frequencies, as well as distinct variations among developers in their rates of producing various bug patterns. Overall, the data suggest that developers have difficulty with specific code situations at surprisingly consistent rates; there appear to be broad mechanisms causing the injection of bugs that are largely independent of the type of software being produced.
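The paper's extractors work over syntax components of hunk pairs; as a loose illustration only, the toy classifier below matches single-line old/new pairs against two of the named patterns using regular expressions. It is far cruder than real syntax-aware extraction:

```python
import re

def classify_hunk(old_line: str, new_line: str) -> str:
    """Very rough single-line matcher for two of the bug fix patterns."""
    old_if = re.search(r"if\s*\((.*)\)", old_line)
    new_if = re.search(r"if\s*\((.*)\)", new_line)
    if old_if and new_if and old_if.group(1) != new_if.group(1):
        return "IF-CC"   # change in if conditional
    if "=" in old_line and "=" in new_line and old_line != new_line:
        return "AS-CE"   # change of assignment expression
    return "other"

print(classify_hunk("if (x > 0)", "if (x >= 0)"))          # IF-CC
print(classify_hunk("total = a + b;", "total = a - b;"))   # AS-CE
```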

10.
Context: Effort-aware models, e.g., effort-aware bug prediction models, aim to help practitioners identify and prioritize buggy software locations according to the effort involved in fixing the bugs. Since the effort of current bugs is not yet known and the effort of past bugs is typically not explicitly recorded, effort-aware bug prediction models are forced to use approximations, such as the number of lines of code (LOC) of the predicted files. Objective: Although the choice of these approximations is critical for the performance of the prediction models, there is no empirical evidence on whether LOC is actually a good approximation. Therefore, in this paper, we investigate the question: is LOC a good measure of effort for use in effort-aware models? Method: We perform an empirical study on four open source projects for which we obtain explicitly recorded effort data, and compare the use of LOC to various complexity, size, and churn metrics as measures of effort. Results: We find that a combination of complexity, size, and churn metrics is a better measure of effort than LOC alone. Furthermore, we examine the impact of our findings on previous effort-aware bug prediction work and find that using LOC as a measure of effort does not significantly affect the list of files being flagged; however, using LOC underestimates the amount of effort required, compared to our best effort predictor, by approximately 66%. Conclusion: Studies using effort-aware models should not assume that LOC is a good measure of effort. For effort-aware bug prediction, using LOC yields results similar to combining complexity, churn, size, and LOC as an effort proxy when prioritizing the most risky files; however, for the purpose of effort estimation, using LOC may underestimate the effort required.
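A small sketch of the comparison the paper describes, assuming fabricated per-file metrics and recorded effort: fit one regression on LOC alone and one on the combined metrics, then compare fit quality:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# fabricated per-file metrics
loc        = np.array([100, 2500, 400, 1200, 80, 900])
complexity = np.array([5, 60, 30, 20, 3, 45])
churn      = np.array([10, 300, 200, 50, 5, 250])
effort     = np.array([2, 40, 25, 12, 1, 35])  # recorded effort (e.g., hours)

combined_X = np.column_stack([loc, complexity, churn])
loc_only = LinearRegression().fit(loc.reshape(-1, 1), effort)
combined = LinearRegression().fit(combined_X, effort)

print("LOC-only R^2:", loc_only.score(loc.reshape(-1, 1), effort))
print("combined R^2:", combined.score(combined_X, effort))
```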

11.
Nowadays, many software organizations rely on automatic problem reporting tools to collect crash reports directly from users' environments. These crash reports are later grouped into crash types. Usually, developers prioritize crash types based on the number of crash reports and file bug reports for the top crash types. Because a bug can trigger a crash in different usage scenarios, different crash types are sometimes related to the same bug, and two bugs are correlated when the occurrence of one causes the other to occur. We refer to a group of crash types related to identical or correlated bug reports as a crash correlation group. In this paper, we propose five rules to identify correlated crash types automatically, an algorithm to locate and rank buggy files using crash correlation groups, and a method to identify duplicate and related bug reports. Through an empirical study on Firefox and Eclipse, we show that the first three rules can identify crash correlation groups using stack trace information with a precision of 91% and a recall of 87% for Firefox, and a precision of 76% and a recall of 61% for Eclipse. On the top three buggy file candidates, the proposed bug localization algorithm achieves a recall of 62% and a precision of 42% for Firefox, and a recall of 52% and a precision of 50% for Eclipse; on the top 10 candidates, the recall increases to 92% for Firefox and 90% for Eclipse. The proposed duplicate bug report identification method achieves a recall of 50% and a precision of 55% on Firefox, and a recall of 47% and a precision of 35% on Eclipse. Developers can combine the proposed crash correlation rules with the new bug localization algorithm to identify and fix correlated crash types together, and triagers can use the duplicate bug report identification method to reduce their workload by filtering duplicate bug reports automatically.
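The five rules themselves are not reproduced in this abstract; as an assumption-labeled illustration of the kind of stack-trace comparison involved, the sketch below flags two crash types as correlation-group candidates when their top frames overlap. The frame names and the 0.5 threshold are hypothetical:

```python
def top_frame_overlap(trace_a: list[str], trace_b: list[str], depth: int = 3) -> float:
    """Jaccard overlap of the top stack frames of two crash types."""
    a, b = set(trace_a[:depth]), set(trace_b[:depth])
    return len(a & b) / len(a | b)

t1 = ["nsLayout::Reflow", "nsFrame::Paint", "main"]
t2 = ["nsLayout::Reflow", "nsFrame::Paint", "event_loop"]
if top_frame_overlap(t1, t2) >= 0.5:  # hypothetical threshold
    print("candidate crash correlation group")
```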

12.
Xi Shengqu (席圣渠), Yao Yuan (姚远), Xu Feng (徐锋), Lü Jian (吕建). Journal of Software (软件学报), 2018, 29(8): 2322-2335
As open source projects grow in scale, manually assigning an appropriate developer to each bug report (bug triage) becomes increasingly difficult, and inappropriate assignments severely reduce the efficiency of bug fixing, so an assisted bug triage technique is urgently needed to help project managers assign bug reports. Most existing work characterizes developers by analyzing bug report text and related metadata, but ignores developer activity; as a result, prediction performs poorly when candidate developers have similar profiles. This paper proposes DeepTriage, a deep learning model based on recurrent neural networks. It extracts textual features of a bug report with a bidirectional recurrent network plus pooling, extracts a developer's activity features at a specific moment with a unidirectional recurrent network, fuses the two, and performs supervised learning on already-fixed bug reports. Experiments on four open source project datasets, including Eclipse, show that DeepTriage significantly improves bug triage prediction accuracy over comparable work.
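A minimal PyTorch sketch of the two-branch design described above (bidirectional recurrent network with pooling over report text, unidirectional recurrent network over developer activity, fused for classification). All dimensions, the choice of GRU cells, and the activity encoding are assumptions, not the paper's published configuration:

```python
import torch
import torch.nn as nn

class DeepTriageSketch(nn.Module):
    def __init__(self, vocab_size, n_devs, emb=64, hidden=64, act_feats=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.text_rnn = nn.GRU(emb, hidden, bidirectional=True, batch_first=True)
        self.act_rnn = nn.GRU(act_feats, hidden, batch_first=True)
        self.out = nn.Linear(2 * hidden + hidden, n_devs)

    def forward(self, tokens, activity):
        text, _ = self.text_rnn(self.embed(tokens))  # (B, T, 2*hidden)
        text = text.max(dim=1).values                # max pooling over time
        _, act = self.act_rnn(activity)              # final activity state
        return self.out(torch.cat([text, act[-1]], dim=1))

model = DeepTriageSketch(vocab_size=5000, n_devs=200)
logits = model(torch.randint(0, 5000, (4, 50)), torch.randn(4, 30, 8))
print(logits.shape)  # (4, 200): scores over candidate developers
```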

13.
The severity of a software bug report plays a key role in how the bug is resolved. As software grows in scale, open bug tracking systems have become the primary means of handling massive amounts of defect data, and analyzing the role of report severity in these data warehouses is an important part of handling software defects. Through a study of data from the Bugzilla bug tracking system, we find that attribute characteristics differ considerably across projects, while the statistical characteristics of attributes such as fix rate, resolution time, developers, and components are consistent. Based on a systematic analysis of data from the Mozilla and Eclipse projects and the distribution of severity levels across components and projects, we conclude that raising the severity of a bug report increases its fix rate, that bugs at the "normal" severity level have the shortest resolution time, and that the more bugs a developer holds, the lower the fix rate.

14.
Developers occasionally make more than one patch to fix a bug. The related patches are sometimes intentionally separated, but unintended omission errors require supplementary patches. Several change recommendation systems based on clone analysis, structural dependency, and historical change coupling have been suggested to reduce or prevent incomplete patches. However, very few studies have examined why incomplete patches occur and how real-world omission errors could be reduced. This paper systematically studies a group of bugs that were fixed more than once in open source projects in order to understand the characteristics of incomplete patches. Our study on Eclipse JDT core, Eclipse SWT, Mozilla, and Equinox p2 showed that a significant portion of resolved bugs require more than one fix attempt. Compared to single-fix bugs, multi-fix bugs did not have lower-quality bug reports, but more attribute changes (e.g., cc'ed developers or title) were made to them. Multi-fix bugs are more likely to have high severity levels, so more developers participated in discussions about them, and they take longer to resolve than single-fix bugs do. Incomplete patches are longer and more scattered, and they touch more files than regular patches. Our manual inspection showed that the causes of incomplete patches are diverse, including missed porting updates, incorrect handling of conditional statements, and incomplete refactoring. Only 7% to 17% of supplementary patches had content similar to their initial patches, which implies that supplementary patch locations cannot be predicted by code clone analysis alone. Furthermore, 16% to 46% of supplementary patches were beyond the scope of the immediate structural dependency of their initial patch locations. Historical co-change patterns also showed low precision in predicting supplementary patch locations. Code clones, structural dependencies, and historical co-change analyses predicted different supplementary patch locations, with little overlap between them, and even combining these analyses did not cover all supplementary patch locations. This study investigates the characteristics of incomplete patches and multi-fix bugs, which previous research has not systematically examined. We show that predicting supplementary patches is a difficult problem that existing change recommendation approaches cannot cover; new types of approaches should be developed and validated on supplementary-patch data sets, in which developers failed to produce a complete patch on the first attempt.

15.
The probabilistic reasoning scheme of Dempster-Shafer theory provides a remarkably efficient bug identification algorithm for a hierarchical Buggy model. In the particular Buggy model generated by the repair theory of Brown & Van Lehn (1980, A generative theory of bugs in procedural skills, Cognitive Science, 2, 155-192), both the Shafer & Logan (1987, Implementing Dempster's rule for hierarchical evidence, Artificial Intelligence, 33, 271-298) and the Gordon & Shortliffe (1985, A method for managing evidential reasoning in a hierarchical hypothesis space, Artificial Intelligence, 26, 324-357) schemes provide almost identical computational accuracy, although the latter involves a "smallest superset" approximation. If n denotes the number of bugs to be identified, the computational complexity of the two schemes, originally O(n^{4/3}) and O(n^2) respectively, can be improved to O(n) using a simplified top-down calculation scheme whereby, from among all the nodes, we first locate the particular "parental" node to which the bug belongs and then the bug itself among the siblings within that node. On average, about 5-7 problems are adequate to raise the belief function of the bug to the 95% level under these evidential reasoning schemes.
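Dempster's rule of combination itself is standard, and a tiny flat (non-hierarchical) sketch of it is below; the frame of three bugs and the two evidence masses are invented, and the hierarchical bookkeeping that makes the schemes above efficient is omitted:

```python
def combine(m1: dict, m2: dict) -> dict:
    """Dempster's rule of combination over focal sets (frozensets of bugs)."""
    raw, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                raw[inter] = raw.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2  # mass on disjoint sets is renormalized away
    return {s: w / (1 - conflict) for s, w in raw.items()}

frame = frozenset({"bug1", "bug2", "bug3"})
# evidence from one solved problem: supports the node {bug1, bug2}
m1 = {frozenset({"bug1", "bug2"}): 0.8, frame: 0.2}
# evidence from a second problem: supports bug1 directly
m2 = {frozenset({"bug1"}): 0.7, frame: 0.3}
print(combine(m1, m2))  # belief concentrates on bug1
```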

16.
Context: Some recent static techniques for automatic bug localization have been built around modern information retrieval (IR) models such as latent semantic indexing (LSI). Latent Dirichlet allocation (LDA) is a generative statistical model with significant advantages, in modularity and extensibility, over both LSI and probabilistic LSI (pLSI), and it has been shown effective in topic-model-based information retrieval. In this paper, we present a static LDA-based technique for automatic bug localization and evaluate its effectiveness. Objective: We evaluate the accuracy and scalability of the LDA-based technique and investigate whether it is suitable for use with open-source software systems of varying size, including those developed using agile methods. Method: We present five case studies designed to determine the accuracy and scalability of the LDA-based technique, as well as its relationships to software system size and source code stability. The studies examine over 300 bugs across more than 25 iterations of three software systems. Results: The LDA-based technique maintains sufficient accuracy across all bugs in a single iteration of a software system and scales to a large number of bugs across multiple revisions of two software systems. The results also indicate that the accuracy of the technique is affected neither by the size of the subject software system nor by the stability of its source code base. Conclusion: An effective static technique for automatic bug localization can be built around LDA, and there is no significant relationship between its accuracy and the size of the subject system or the stability of its code base. The LDA-based technique is thus widely applicable.
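A minimal sketch of LDA-based localization, assuming toy file contents and a toy bug report: fit LDA over source files, infer the bug report's topic mixture, and rank files by topic-space similarity. Corpus, topic count, and similarity measure are all illustrative choices:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

source_files = {
    "parser.py":  "parse token grammar syntax tree node",
    "render.py":  "draw paint canvas pixel color layout",
    "network.py": "socket request response timeout retry",
}
vec = CountVectorizer()
X = vec.fit_transform(source_files.values())
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
file_topics = lda.transform(X)

bug = ["crash while painting the canvas layout"]
bug_topics = lda.transform(vec.transform(bug))
sims = cosine_similarity(bug_topics, file_topics)[0]
for name, s in sorted(zip(source_files, sims), key=lambda p: -p[1]):
    print(name, round(s, 3))  # ranked bug-localization suggestions
```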

17.
Machine learning (ML) techniques and algorithms have been successfully and widely used in various areas, including software engineering tasks. Like other software projects, ML projects and libraries contain bugs. To understand more deeply the features related to bug fixing in ML projects, we conduct an empirical study of 939 bugs from five ML projects, manually examining the bug categories, fixing patterns, fixing scale, fixing duration, and types of maintenance. The results show that (1) there are commonly seven types of bugs in ML programs; (2) twelve fixing patterns are typically used to fix them; (3) 68.80% of the patches are micro-scale or small-scale fixes; (4) 66.77% of the bugs can be fixed within one month; and (5) 45.90% of the bug fixes are corrective activity from the perspective of software maintenance. Moreover, we performed a questionnaire survey of developers and users of ML projects to validate these results, and their feedback is largely consistent with our empirical findings. The findings provide useful guidance and insights for developers and users to effectively detect and fix bugs in ML projects.

18.
Wang Yan (王燕), Wu Huayao (吴化尧), Nie Changhai (聂长海), Xu Jiaxi (徐家喜), Yin Zhen (尹震), Niu Xintao (钮鑫涛). Journal of Software (软件学报), 2022, 33(11): 3983-4007
Defect tracking is an important part of software project management and a necessary means of keeping large-scale open source development running smoothly while continuously improving software quality. Most open source projects manage defects with an open bug tracking system, which allows users to report system faults (defect-type issues) and suggest improvements (enhancement-type issues) to developers; however, the role this user feedback plays has not been fully studied. To address this question, this paper presents an empirical study of Firefox's bug tracking system, collecting 19,474 Firefox Desktop and 3,057 Firefox for Android bug reports submitted in 2018 and 2019. On this basis, it compares bugs submitted by ordinary users and by core developers in terms of volume, severity, component distribution, fix rate, fix speed, and fixers, and investigates how the writing quality of a bug report relates to its resolution outcome and fix time. The main findings are: (1) the current bug tracking system has many ordinary users, but their participation is shallow: 86% of users submitted only one bug, and no more than 3% of those bugs have high severity; (2) bugs submitted by ordinary users cluster in UI components related to user interaction (e.g., the address bar, audio/video), yet 43% of them lack sufficient descriptive information to be accurately mapped to a specific component; (3) in terms of resolution outcome, because the duplicate detection and bug reporting subsystems are too simplistic in design, a large share of user-submitted bugs are resolved as "useless", and the fix rate is below 10%; (4) in terms of the fixing process, because ordinary users struggle to describe bugs accurately and fully, their reports receive less attention, go through a more complex handling process than reports from core developers, and take on average at least 8 more days to fix. These results reveal shortcomings of current bug tracking systems in incentivizing user participation, automatically detecting duplicate reports, and intelligently assisting report writing, and they offer a reference for developers and maintainers of bug tracking systems to improve these systems and increase ordinary users' contributions to open source software.

19.
Xie Zheng (解铮), Li Ming (黎铭). Journal of Software (软件学报), 2017, 28(11): 3072-3079
In the development and maintenance of large software projects, locating defects among a large number of code files is time-consuming and labor-intensive, and effective automatic bug localization would greatly reduce development cost. Bug reports usually contain a wealth of information about undiscovered defects, so accurately finding the code files associated with a bug report is of great value for reducing maintenance cost. Several deep-neural-network-based bug localization techniques have improved on traditional methods, but most of that work focuses on designing the network architecture and pays little attention to the loss function used during training, even though the loss function greatly affects predictive performance. Against this background, this paper proposes the cost-sensitive margin distribution optimization (CSMDO) loss function and applies a CSMDO layer to a deep convolutional neural network; the resulting model handles the imbalance of software defect data well and further improves localization accuracy.
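The paper's exact CSMDO formulation is not given in this abstract; the PyTorch sketch below only follows the general idea of margin distribution optimization (encourage a large margin mean and a small margin variance) with per-class costs, and should not be read as the authors' loss:

```python
import torch

def csmdo_like_loss(scores, labels, pos_cost=5.0, neg_cost=1.0):
    """Cost-weighted margin-distribution surrogate: minimize margin variance,
    maximize margin mean, up-weighting the minority (buggy) class."""
    y = labels * 2.0 - 1.0                 # {0,1} -> {-1,+1}
    margins = y * scores.squeeze()
    costs = torch.where(labels.bool(), torch.tensor(pos_cost),
                        torch.tensor(neg_cost))
    w = costs / costs.sum()                # normalized per-example weights
    mean = (w * margins).sum()
    var = (w * (margins - mean) ** 2).sum()
    return var - mean

scores = torch.randn(8, 1, requires_grad=True)   # e.g., CNN output layer
labels = torch.tensor([1., 0., 0., 0., 1., 0., 0., 0.])
loss = csmdo_like_loss(scores, labels)
loss.backward()
print(float(loss))
```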

20.
Zhang Tianlun (张天伦), Chen Rong (陈荣), Yang Xi (杨溪), Zhu Hongyu (祝宏玉). Journal of Software (软件学报), 2019, 30(5): 1386-1406
Bugs are unavoidable in the development of any software system, and bug reports are the most useful tool developers have for fixing them. Manually classifying bug reports, however, places a new burden on developers, so automatically classifying bug reports is necessary work. This paper therefore proposes extreme-learning-machine-based methods for bug report classification, addressing three problems: first, the imbalance between classes in bug report datasets; second, the shortage of labeled samples; and third, the shortage of samples overall. To solve these three problems, it introduces, respectively, a cost-based supervised classification method, a fuzziness-based semi-supervised learning method, and a sample transfer method. Experiments on multiple bug report datasets verify the feasibility and effectiveness of these methods.
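A NumPy sketch of the first remedy only, in the spirit of a cost-weighted extreme learning machine: a random hidden layer whose output weights are solved by cost-weighted regularized least squares, with the minority class up-weighted. The cost values, data, and regularization are invented, and the fuzziness-based semi-supervised and transfer parts are omitted:

```python
import numpy as np

def train_weighted_elm(X, y, n_hidden=20, pos_cost=5.0, seed=0):
    """Cost-sensitive ELM sketch: random features + weighted ridge solve."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                   # random hidden-layer mapping
    c = np.where(y == 1, pos_cost, 1.0)      # per-sample misclassification cost
    C = np.diag(c)
    beta = np.linalg.solve(H.T @ C @ H + 1e-3 * np.eye(n_hidden), H.T @ C @ y)
    return W, b, beta

# imbalanced toy data: 18 ordinary reports, 2 of the minority class
X = np.vstack([np.random.default_rng(1).normal(0, 1, (18, 4)),
               np.random.default_rng(2).normal(2, 1, (2, 4))])
y = np.array([0] * 18 + [1] * 2)

W, b, beta = train_weighted_elm(X, y)
pred = (np.tanh(X @ W + b) @ beta > 0.5).astype(int)
print(pred)
```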
