Similar Documents
 18 similar documents found (search time: 454 ms)
1.
胡庆辉  丁立新  何进荣 《软件学报》2013,24(11):2522-2534
In machine learning, kernel methods are an effective means of solving nonlinear pattern recognition problems. Replacing traditional single-kernel learning with multiple kernel learning has become a new research focus: when handling heterogeneous, irregular, and unevenly distributed sample data, it offers better flexibility, interpretability, and superior generalization performance. Building on multiple kernel learning methods from supervised learning, this paper proposes an optimization model for a multi-kernel semi-supervised support vector machine (S3VM) with an Lp-norm constraint. The parameters to be optimized include the decision functions fm in the high-dimensional space and the kernel combination weights θm, and the model inherits the non-convex, non-smooth nature of the single-kernel semi-supervised SVM. A two-level optimization procedure optimizes the two parameter groups: an improved quasi-Newton method handles the non-smoothness with respect to fm, and a local search based on pairwise label swapping handles the non-convexity, yielding an approximately optimal solution. Both base kernels and manifold kernels are included in the multi-kernel framework to fully exploit the geometric properties of the data. Experimental results verify the effectiveness and good generalization performance of the algorithm.

2.
Web Image Annotation Based on Enhanced Sparse Feature Selection  (cited 1 time: 0 self-citations, 1 by others)
史彩娟  阮秋琦 《软件学报》2015,26(7):1800-1811
With the explosive growth of Web images, Web image annotation has become a popular research topic in recent years, and sparse feature selection plays an important role in improving both the efficiency and the performance of Web image annotation. This paper proposes an enhanced sparse feature selection algorithm for Web image annotation: semi-supervised sparse feature selection based on the l2,1/2 matrix norm with shared subspace learning (SFSLS). SFSLS applies the l2,1/2 matrix norm to select the sparsest and most discriminative features, and captures the correlations between different features through shared subspace learning. In addition, graph-Laplacian-based semi-supervised learning lets SFSLS exploit both labeled and unlabeled data. An efficient iterative algorithm is designed to optimize the objective function. SFSLS is compared with other sparse feature selection algorithms on two large-scale Web image databases; the results show that SFSLS is better suited to annotating large-scale Web images.

3.
Software defect prediction helps improve software development quality and ensures that testing resources are allocated effectively. To address the difficulty of obtaining labeled data and the class-imbalanced distribution in software defect prediction, this paper proposes a sampling-based semi-supervised support vector machine prediction model. The model uses an unsupervised sampling technique to ensure that the proportion of defective samples among the labeled data does not become too low, and uses a semi-supervised support vector machine to build a prediction model from a small amount of labeled data together with the information in unlabeled data. Simulation experiments on the public NASA software defect prediction datasets show that, compared with existing semi-supervised methods, the proposed method is superior in both F-measure and recall; compared with supervised methods, it achieves comparable prediction performance with fewer training samples.
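As a rough illustration of the self-training idea behind such semi-supervised models, the sketch below grows a labeled set by pseudo-labeling the unlabeled point nearest an existing class centroid. It is a hypothetical simplification: a nearest-centroid rule stands in for the S3VM, and all names and data are invented.

```python
# Hypothetical sketch: self-training with a nearest-centroid rule standing in
# for the semi-supervised SVM; data and names are illustrative.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def self_train(labeled, unlabeled, rounds=3):
    """labeled: list of (features, label in {0, 1}); unlabeled: list of features.
    Each round pseudo-labels the unlabeled point closest to either class centroid."""
    labeled, pool = list(labeled), list(unlabeled)
    for _ in range(rounds):
        if not pool:
            break
        c0 = centroid([f for f, y in labeled if y == 0])
        c1 = centroid([f for f, y in labeled if y == 1])
        best = min(pool, key=lambda f: min(dist2(f, c0), dist2(f, c1)))
        pool.remove(best)
        labeled.append((best, 0 if dist2(best, c0) < dist2(best, c1) else 1))
    return labeled

def predict(labeled, x):
    c0 = centroid([f for f, y in labeled if y == 0])
    c1 = centroid([f for f, y in labeled if y == 1])
    return 0 if dist2(x, c0) < dist2(x, c1) else 1
```

A real implementation would also guard against pseudo-labeling noisy points, which is what the unsupervised sampling step in the paper addresses.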

4.
刘鹏远  赵铁军 《软件学报》2009,20(5):1292-1300
To address the data sparseness and knowledge acquisition problems that plague word sense and translation disambiguation, this paper proposes a Web-based disambiguation method using n-gram statistical language models. Starting from the assumption that a word sense corresponds to its n-gram language model, the method first uses HowNet to map the English translations of an ambiguous Chinese word to HowNet DEFs and obtain the word set under each DEF; it then queries a search engine to estimate the n-gram occurrence probabilities of the words under different DEFs on the Web, and makes the disambiguation decision accordingly. On the test set of the Multilingual Chinese English Lexical Sample Task in the international semantic evaluation SemEval-2007, the method achieves a Pmar value of 55.9%, 12.8% higher than the best unsupervised system that participated in that task.
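The scoring idea — word sets under each candidate sense vote via co-occurrence counts — can be sketched as follows. This is a toy reconstruction: `hit_count` stands in for Web search counts, the word sets stand in for HowNet DEF word sets, and all counts are invented.

```python
# Toy sketch of sense scoring by co-occurrence counts; counts are invented.

def disambiguate(context_word, senses, hit_count):
    """senses: {sense_name: [related words]}; hit_count(w1, w2) -> count."""
    scores = {}
    for sense, words in senses.items():
        total = sum(hit_count(context_word, w) for w in words)
        scores[sense] = total / len(words)  # normalize by word-set size
    return max(scores, key=scores.get)

FAKE_COUNTS = {("bank", "money"): 900, ("bank", "loan"): 700,
               ("bank", "river"): 40, ("bank", "shore"): 30}

best = disambiguate("bank",
                    {"finance": ["money", "loan"], "geography": ["river", "shore"]},
                    lambda a, b: FAKE_COUNTS.get((a, b), 0))
# best == "finance"
```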

5.
Positive-unlabeled (PU) learning trains a binary classifier using only unlabeled and positive samples, while a generative adversarial network (GAN) obtains an image generator through adversarial training. To transfer GAN's adversarial training scheme to PU learning and improve its performance, the generator in the GAN can be replaced with a classifier C that selects samples from the unlabeled dataset to fool the discriminator D, with C and D optimized iteratively. This paper proposes the JS-PAN model, which uses the Jensen-Shannon (JS) divergence as the objective function, and argues, based on the characteristics of the data distributions involved, that the PAN model is well suited to binary classification of medical diagnostic images. Experiments on MNIST and CIFAR-10 show that the KL-PAN model achieves higher accuracy (ACC) and F1-score than comparable PU learning models, and that after the symmetrization the JS-PAN model improves on both metrics, supporting its design. Experiments on three image subsets of Med-MNIST show that KL-PAN achieves almost the same ACC as four supervised benchmark models, and JS-PAN performs even better. Overall, given PAN's strong classification results and the distribution characteristics of medical diagnostic data, PAN as a semi-supervised method offers faster and better results for binary classification of medical images.

6.
陈翔  王秋萍 《计算机科学》2018,45(6):161-165
Defect prediction at the level of code changes has the advantages of a small code review volume and fast defect localization and repair. This paper is the first to model the problem as multi-objective optimization, with one objective maximizing the number of defective code changes identified and the other minimizing the amount of code to be reviewed. Since these two objectives conflict, the paper proposes the MULTI method, which generates a set of prediction models in a non-dominated relationship. The empirical study considers six large-scale open-source projects (227,417 code changes in total), with ACC and POPT as the performance metrics. The results show that MULTI significantly outperforms the classic supervised methods (EALR and Logistic) and unsupervised methods (LT and AGE).
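The non-dominated model set that MULTI returns can be characterized by Pareto dominance over the two objectives. A minimal sketch, assuming each model is summarized by a (defects identified, lines reviewed) pair with invented numbers:

```python
# Pareto filtering sketch: maximize defects_found, minimize loc_reviewed.

def dominates(a, b):
    """a dominates b if it is at least as good on both objectives and differs."""
    return a != b and a[0] >= b[0] and a[1] <= b[1]

def pareto_front(models):
    return [m for m in models if not any(dominates(o, m) for o in models)]

candidates = [(10, 500), (8, 300), (10, 700), (5, 300), (12, 900)]
front = pareto_front(candidates)
# (10, 700) is dominated by (10, 500), and (5, 300) by (8, 300)
```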

7.
In semi-supervised learning, the random selection of unlabeled samples often degrades classifier performance and causes instability; meanwhile, traditional semi-supervised algorithms perform poorly on high-dimensional data containing only a few labeled samples. To address these problems, this paper explores both the sample space and the feature space and proposes a safe semi-supervised learning algorithm combining stochastic subspace technology and ensemble technology (S3LSE) for classifying high-dimensional data with very few labeled samples. First, S3LSE uses the random subspace technique to decompose the high-dimensional dataset into B feature subsets and optimizes each subset using implicit information among the samples, forming B optimal feature subsets. Next, each optimal feature subset is sampled into G sample subsets; in each sample subset, a safe sample-labeling method expands the labeled samples, producing G classifiers that are combined into an ensemble. The B ensemble classifiers generated from the B optimal feature subsets are then combined again to classify the high-dimensional data. Finally, experiments simulating the semi-supervised learning process on high-dimensional datasets show that S3LSE performs well.
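The first step — decomposing a high-dimensional feature space into B random subsets — can be sketched as below; the sizes, seed, and names are illustrative choices, not from the paper.

```python
import random

# Sketch of the random-subspace decomposition step: split n_features features
# into n_subsets random subsets of subset_size each (with replacement across
# subsets, without replacement within a subset).

def random_subspaces(n_features, n_subsets, subset_size, seed=0):
    rng = random.Random(seed)
    return [sorted(rng.sample(range(n_features), subset_size))
            for _ in range(n_subsets)]

subspaces = random_subspaces(n_features=100, n_subsets=5, subset_size=10)
```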

8.
刘鹏远  赵铁军 《软件学报》2010,21(4):575-585
To address the data sparseness and knowledge acquisition problems in word sense and translation disambiguation, this paper proposes a fully unsupervised disambiguation method that exploits indirect Web associations between bilingual words. It first assumes that lexical ambiguity can be resolved by the degree of indirect association between bilingual words, which provides a new kind of knowledge for translation disambiguation. On this basis, four common measures of indirect association are adapted and a bilingual indirect Web association is defined. Disambiguation knowledge is then acquired from the Web, and three disambiguation decision methods are designed. Finally, the method is evaluated on the test set of the Multilingual Chinese English Lexical Sample Task at the international semantic evaluation SemEval-2007, achieving a Pmar value of 44.4% and exceeding the best unsupervised system in that evaluation.

9.
To address the problem that fuzzy clustering requires the optimal number of clusters to be known in advance, this paper proposes a new membership-ratio-based clustering validity index, Vnew. First, following the design of classic validity indices and taking into account both the membership matrix and the geometric distribution of the dataset, a basic validity index is derived by redefining the within-cluster and between-cluster distances. Second, the concept of membership ratio is defined to amplify the value of the basic index. Finally, to prevent the membership ratio from dominating the index and rendering it meaningless, the number of clusters is introduced as a damping term. Theoretical analysis and simulation experiments on the same datasets show that, compared with the classic Xie-Beni index Vxb, the new index Vnew is more accurate and reliable, and can partition correctly even when clusters overlap, giving it practical value.
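For reference, the classic Xie-Beni index mentioned above divides fuzzy compactness by the product of the sample count and the minimal separation between cluster centers; lower values indicate better partitions. A minimal sketch with invented toy data:

```python
def sqdist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def xie_beni(data, centers, u, m=2.0):
    """Xie-Beni index: fuzzy compactness / (n * minimal center separation).
    u[i][k] is the membership of sample k in cluster i; lower is better."""
    compact = sum(u[i][k] ** m * sqdist(data[k], centers[i])
                  for i in range(len(centers)) for k in range(len(data)))
    sep = min(sqdist(ci, cj)
              for i, ci in enumerate(centers)
              for j, cj in enumerate(centers) if i < j)
    return compact / (len(data) * sep)

data = [(0, 0), (0, 1), (5, 5), (5, 6)]
good = xie_beni(data, [(0, 0.5), (5, 5.5)], [[1, 1, 0, 0], [0, 0, 1, 1]])
bad = xie_beni(data, [(0, 0), (0, 1)], [[1, 0, 1, 0], [0, 1, 0, 1]])
# the well-separated partition scores lower (better)
```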

10.
Monocular depth estimation is a fundamental problem in computer vision, and the patch-match and plane-regularization network (P2Net) is one of the most advanced unsupervised monocular depth estimation methods to date. Because the upsampling layer in P2Net's depth prediction network uses the computationally simple nearest-neighbor interpolation, the quality of the predicted depth map suffers. This paper therefore builds a residual upsampling structure from several upsampling algorithms to replace the original upsampling layer, capturing more feature information and improving the structural completeness of objects. Experiments on the NYU-Depth V2 dataset show that improved P2Net networks based on deconvolution, bilinear interpolation, and pixel shuffle reduce the root-mean-square error (RMSE) by 2.25%, 2.73%, and 3.05%, respectively, compared with the original network. The proposed residual upsampling structure improves the quality of the predicted depth map and reduces prediction error.
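For contrast with the richer interpolations the improved network combines, the nearest-neighbor upsampling originally used in P2Net simply repeats each source pixel. A minimal 2-D sketch on a toy integer grid:

```python
# Nearest-neighbour upsampling of a 2-D grid: each output pixel copies the
# nearest (integer-division) source pixel. Toy data, not the network's tensors.

def nearest_upsample(img, factor):
    return [[img[r // factor][c // factor]
             for c in range(len(img[0]) * factor)]
            for r in range(len(img) * factor)]

up = nearest_upsample([[1, 2], [3, 4]], 2)
# → [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

The blocky output explains why nearest-neighbor interpolation degrades fine structure in predicted depth maps.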

11.
Software defect prediction improves the efficiency of software development and testing and safeguards software quality. Unsupervised defect prediction methods require no labeled data and can therefore be applied quickly in practice. This paper proposes a probability-based unsupervised defect prediction method, PCLA, which maps the gap between a metric value and its threshold to a probability, uses the probability to assess the likelihood that a class is defective, and then completes the prediction through clustering and labeling. This mitigates the information loss caused by existing unsupervised methods, which judge directly against thresholds and are therefore sensitive to them. PCLA was applied to seven software projects in the NetGen and Relink dataset groups; the results show that, compared with existing unsupervised methods, PCLA improves recall, precision, and F-measure by 4.1%, 2.52%, and 3.14% on average, respectively.
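The abstract does not specify how the metric-threshold gap is mapped to a probability; the sketch below assumes a logistic mapping and averages per-metric probabilities into a defect score. All metric values and thresholds are invented.

```python
import math

# Sketch of the gap-to-probability idea: the logistic mapping and scale are
# assumptions; the abstract only says the gap is mapped to a probability.

def metric_prob(value, threshold, scale=1.0):
    return 1.0 / (1.0 + math.exp(-(value - threshold) / scale))

def defect_score(metrics, thresholds):
    probs = [metric_prob(v, t) for v, t in zip(metrics, thresholds)]
    return sum(probs) / len(probs)

risky = defect_score([20.0, 300.0], [10.0, 100.0])   # far above thresholds
clean = defect_score([2.0, 40.0], [10.0, 100.0])     # far below thresholds
```

Unlike a hard threshold test, the soft score preserves how far each metric sits from its threshold, which is the information loss PCLA aims to avoid.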

12.
Object-oriented software metrics are an important means of understanding and assuring the quality of object-oriented software. Comparing a metric value with its threshold offers a simple, intuitive assessment of whether a module may contain defects. Methods for determining metric thresholds fall mainly into unsupervised learning methods based on the distribution of the data and supervised learning methods based on correlation with defects. Each has strengths and weaknesses: unsupervised methods need no label information and are easy to implement, but the resulting thresholds usually predict defects poorly; supervised methods improve the predictive performance of the thresholds through machine learning, but label information is hard to obtain in practice and linking metrics to defects is technically complex. In recent years, researchers in both directions have made considerable progress, yet determining object-oriented metric thresholds still faces open challenges. This paper systematically surveys the research in this field. It first states the research problem of determining object-oriented software metric thresholds; it then summarizes progress in unsupervised and supervised learning methods, laying out the underlying theory and implementation techniques; next, it briefly introduces other related techniques for metric thresholds; finally, it summarizes the current challenges and suggests future research directions.

13.
倪超  陈翔  刘望舒  顾庆  黄启国  李娜 《软件学报》2019,30(5):1308-1329
In practice, a project requiring defect prediction may be newly started or may have scarce historical training data. One solution is to build the model with training data gathered from other projects (source projects) and use it to predict defects in the current project (the target project). However, data distributions can differ considerably across projects. To address this, the paper proposes FeCTrA, a two-stage cross-project defect prediction method based on feature transfer and instance transfer. In the feature transfer stage, cluster analysis selects features with high distributional similarity between the source and target projects; in the instance transfer stage, based on TrAdaBoost, a small number of labeled instances in the target project guide the selection of source instances whose distribution is close to those labeled instances. The method is evaluated on the Relink and AEEEM datasets with F1 as the metric. First, FeCTrA outperforms single-stage methods that use only feature transfer or only instance transfer. Second, compared with the classic cross-project defect prediction methods TCA+, the Peters filter, the Burak filter, and DCPDP, FeCTrA improves prediction performance by 23%, 7.2%, 9.8%, and 38.2% respectively on Relink, and by 96.5%, 108.5%, 103.6%, and 107.9% respectively on AEEEM. Finally, an analysis of the factors influencing FeCTrA's performance provides guidance for using the method effectively.

14.
Background and aim: Many sophisticated data mining and machine learning algorithms have been used for software defect prediction (SDP) to enhance software quality. However, real-world SDP datasets suffer from class imbalance, which biases classifiers and degrades the performance of existing classification algorithms, resulting in inaccurate classification and prediction. This work aims to correct the class imbalance of the datasets in order to increase the accuracy of defect prediction and decrease processing time. Methodology: The proposed model balances the classes of the datasets; it consists of a modified undersampling method and a correlation-based feature selection (CFS) method. Results: On ten open-source project datasets, the proposed model achieved F1-scores between 0.52 and 0.96; the best value of 0.96, close to 1, indicates near-perfect performance in the prediction process. Conclusion: Balancing the classes of the datasets with the proposed model increases prediction accuracy and decreases processing time.
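Random undersampling — a plain stand-in for the paper's modified undersampling method, whose details the abstract does not give — can be sketched as:

```python
import random

# Plain random undersampling sketch: drop majority-class samples until the
# majority/minority ratio reaches `ratio`; seed fixed for reproducibility.

def undersample(samples, labels, ratio=1.0, seed=42):
    rng = random.Random(seed)
    pos = [i for i, y in enumerate(labels) if y == 1]
    neg = [i for i, y in enumerate(labels) if y == 0]
    major, minor = (neg, pos) if len(neg) > len(pos) else (pos, neg)
    keep = rng.sample(major, min(len(major), int(len(minor) * ratio)))
    idx = sorted(keep + minor)
    return [samples[i] for i in idx], [labels[i] for i in idx]

xs, ys = undersample(list(range(100)), [1] * 10 + [0] * 90)
# ys now holds 10 defective and 10 clean labels
```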

15.
Software defects, produced inevitably in software projects, seriously affect the efficiency of software testing and maintenance. An appealing solution is software defect prediction (SDP), which has achieved good performance in many software projects. However, differences between features, and differences in the same feature between training and test data, may degrade defect prediction performance if they violate the model's assumptions. To address this issue, we propose an SDP method based on feature transfer learning (FTL), which performs a transformation sequence for each feature in order to map the original features to another feature space. Specifically, FTL first uses a reinforcement learning scheme that automatically learns a strategy for transferring the potential feature knowledge from the training data. The learned feature knowledge then guides the transformation of the test data. The classifier is trained on the transformed training data and predicts defects for the transformed test data. We evaluate the validity of FTL on 43 projects from PROMISE and NASA MDP using three classifiers: logistic regression, random forest, and Naive Bayes (NB). Experimental results indicate that FTL outperforms the original classifiers and performs best with the NB classifier. For PROMISE, after using FTL, the average F1-score, AUC, and MCC are 0.601, 0.757, and 0.350 respectively, which are 24.9%, 2.6%, and 16.7% higher than the original NB classifier's results; the proportions of projects with improved performance are 83.87%, 83.87%, and 64.52%. FTL performs similarly well on NASA MDP. Moreover, compared with four feature engineering (FE) methods, FTL achieves a clear improvement on most projects, and its average performance is better than or close to that of the FE methods.

16.
Software defect prediction is an important decision support activity in software quality assurance. The limited number of labelled modules usually makes prediction difficult, and the class-imbalance characteristic of software defect data negatively influences classifier decisions. Semi-supervised learning can build high-performance classifiers by using a large amount of unlabelled modules together with the labelled modules. Ensemble learning achieves better prediction capability on class-imbalanced data by using a series of weak classifiers to reduce the bias generated by the majority class. In this paper, we propose a new semi-supervised software defect prediction approach: non-negative sparse-based SemiBoost learning. The approach is capable of exploiting both labelled and unlabelled data and is formulated in a boosting framework. To enhance the prediction ability, we design a flexible non-negative sparse similarity matrix, which fully exploits the similarity of historical data by incorporating the non-negativity constraint into sparse learning, so as to better learn the latent clustering relationship among software modules. The widely used datasets from NASA projects are employed as test data to evaluate the performance of all compared methods. Experimental results show that non-negative sparse-based SemiBoost learning outperforms several representative state-of-the-art semi-supervised software defect prediction methods. Copyright © 2016 John Wiley & Sons, Ltd.

17.
To facilitate developers in effective allocation of their testing and debugging efforts, many software defect prediction techniques have been proposed in the literature. These techniques can be used to predict classes that are more likely to be buggy based on the past history of classes, methods, or certain other code elements. They are effective provided that a sufficient amount of data is available to train a prediction model; however, sufficient training data are rarely available for new software projects. To resolve this problem, cross-project defect prediction, which transfers a prediction model trained on data from one project to another, was proposed and is regarded as a new challenge in the area of defect prediction. Thus far, only a few cross-project defect prediction techniques have been proposed. To advance the state of the art, in this study we investigated seven composite algorithms that integrate multiple machine learning classifiers to improve cross-project defect prediction. To evaluate their performance, we performed experiments on 10 open-source software systems from the PROMISE repository, containing a total of 5,305 instances labeled as defective or clean. We compared the composite algorithms with the combined defect predictor that uses logistic regression as the meta-classification algorithm (CODEP_Logistic), the most recent cross-project defect prediction algorithm, in terms of two standard evaluation metrics: cost effectiveness and F-measure. Our experimental results show that several algorithms outperform CODEP_Logistic: maximum voting shows the best performance in terms of F-measure, with an average F-measure superior to that of CODEP_Logistic by 36.88%; bootstrap aggregation (BaggingJ48) shows the best performance in terms of cost effectiveness, with an average cost effectiveness superior to that of CODEP_Logistic by 15.34%.
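The maximum-voting combination that performed best can be sketched as below; the base classifiers here are toy threshold rules, not the study's actual learners.

```python
from collections import Counter

# Maximum-voting sketch: each base classifier votes a label; the most common
# label wins (ties broken by first-counted). Toy threshold rules as stand-ins.

def majority_vote(classifiers, x):
    votes = Counter(clf(x) for clf in classifiers)
    return votes.most_common(1)[0][0]

base = [lambda x: int(x > 3), lambda x: int(x > 5), lambda x: int(x > 4)]
```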

18.
Authorship attribution of text documents is a "hot" domain in research; however, almost all of its applications use supervised machine learning (ML) methods. In this research, we explore authorship attribution as a clustering problem, that is, we attempt to complete the task of authorship attribution using unsupervised machine learning methods. The application domain is responsa, which are answers written by well-known Jewish rabbis in response to various Jewish religious questions. We have built a corpus of 6,079 responsa, composed by five authors who lived mainly in the 20th century and containing almost 10 million words. Clustering tasks were performed for two, three, four, or five authors, using three kinds of word lists: most frequent words (FW) including function words (stopwords), most frequent filtered words (FFW) excluding function words, and words with the highest variance values (HVW); and two unsupervised machine learning methods: K-means and Expectation Maximization (EM). The best clustering tasks for two, three, or four authors achieved results above 98%, with improvement rates above 40% relative to the "majority" (baseline) results. The EM method was found to be superior to K-means for these tasks. FW was found to be the best word list, far superior to FFW. FW, in contrast to FFW, includes function words, which are usually regarded as words with little lexical meaning. This might imply that normalized frequencies of function words can serve as good indicators for authorship attribution using unsupervised ML methods. This finding supports previous findings on the usefulness of function words for other tasks, such as authorship attribution using supervised ML methods, and genre and sentiment classification.
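A minimal K-means sketch over per-document function-word frequencies illustrates the clustering setup; the two-author "documents" below are invented toy vectors, not responsa data.

```python
# Minimal K-means with deterministic initialization (first k points).
# Each "document" is a vector of normalized function-word frequencies.

def sqdist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=20):
    centers = [points[i] for i in range(k)]
    assign = [0] * len(points)
    for _ in range(iters):
        assign = [min(range(k), key=lambda c: sqdist(p, centers[c]))
                  for p in points]
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                centers[c] = tuple(sum(col) / len(members)
                                   for col in zip(*members))
    return assign

# toy documents: frequencies of two function words per document
docs = [(0.9, 0.1), (0.85, 0.15), (0.88, 0.12),   # author A style
        (0.2, 0.8), (0.25, 0.75), (0.18, 0.82)]   # author B style
labels = kmeans(docs, 2)
```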

