Similar Articles
1.
Traditional classifiers perform poorly on imbalanced data. To improve classification on imbalanced data sets, and in particular the ability to recognize minority-class samples, this paper proposes an imbalanced-data classification algorithm based on BSMOTE and reversed under-sampling. The algorithm first over-samples with BSMOTE to synthesize additional minority-class samples; it then preferentially removes redundant and noisy samples and applies reversed under-sampling to invert the ratio of minority- to majority-class samples. Repeating this sampling procedure yields multiple training sets, and the classifiers trained on them are combined with Bagging to make better use of the available information. Experiments show that, compared with several existing algorithms, the method improves not only minority-class performance but also overall accuracy.
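The abstract gives no code, so the following is a minimal Python sketch of the pipeline it describes, using imbalanced-learn's BorderlineSMOTE for the BSMOTE step. Two simplifications are assumptions: the paper's preferential removal of redundant and noisy samples is replaced by random under-sampling, and the reversal ratio (0.6) and ensemble size are illustrative.

```python
import numpy as np
from imblearn.over_sampling import BorderlineSMOTE
from imblearn.under_sampling import RandomUnderSampler
from sklearn.tree import DecisionTreeClassifier

def train_reversed_bagging(X, y, n_members=10, reverse_ratio=0.6, seed=0):
    """Bagging over repeated BSMOTE + reversed under-sampling rounds.

    reverse_ratio < 1 leaves the original majority class as the *smaller*
    class in every resampled training set -- the 'reversal' step.
    Binary 0/1 labels are assumed throughout.
    """
    counts = np.bincount(y)
    minority, majority = int(np.argmin(counts)), int(np.argmax(counts))
    members = []
    for i in range(n_members):
        # 1) BSMOTE: synthesize minority samples near the class boundary.
        X_os, y_os = BorderlineSMOTE(random_state=seed + i).fit_resample(X, y)
        # 2) Reversal: under-sample the majority class below the size of
        #    the (now enlarged) minority class.
        n_min = int(np.sum(y_os == minority))
        target = {majority: int(reverse_ratio * n_min)}
        rus = RandomUnderSampler(sampling_strategy=target, random_state=seed + i)
        X_bal, y_bal = rus.fit_resample(X_os, y_os)
        members.append(DecisionTreeClassifier(random_state=i).fit(X_bal, y_bal))
    return members

def predict_vote(members, X):
    # Plain majority vote over the bagged members (0/1 labels assumed).
    votes = np.stack([m.predict(X) for m in members])
    return (votes.mean(axis=0) >= 0.5).astype(int)
```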

2.
This paper proposes PCARules, a new method for building an ensemble from rule-based base classifiers. Like bagging and boosting, the method classifies a sample by weighted voting over the base classifiers' predictions, but it constructs each base classifier's training set in a completely different way. Instead of sampling, it randomly partitions the features into K subsets, applies PCA to each subset to obtain its principal components, assembles these components into a new feature space, and maps all training data into that space to form the base classifier's training set. Experiments on 30 randomly selected data sets from the UCI machine learning repository show that the algorithm significantly improves the performance of rule-based classification and, on most data sets, achieves higher accuracy than traditional ensemble methods such as bagging and boosting.
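A minimal sketch of this training-set construction follows. Two stand-ins are assumptions: a decision tree replaces the paper's rule-based learner, and uniform voting approximates its weighted voting.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier

def make_rotated_view(X, k_subsets, rng):
    """Randomly partition the features into k subsets, run PCA on each,
    and concatenate the projections into one new feature space."""
    groups = np.array_split(rng.permutation(X.shape[1]), k_subsets)
    pcas = [PCA().fit(X[:, g]) for g in groups]
    def transform(Z):
        return np.hstack([p.transform(Z[:, g]) for p, g in zip(pcas, groups)])
    return transform

def fit_pcarules_like(X, y, n_members=10, k_subsets=3, seed=0):
    """One base classifier per random feature partition."""
    rng = np.random.default_rng(seed)
    members = []
    for _ in range(n_members):
        t = make_rotated_view(X, k_subsets, rng)
        clf = DecisionTreeClassifier().fit(t(X), y)  # stand-in for a rule learner
        members.append((t, clf))
    return members

def predict_vote(members, X):
    votes = np.stack([clf.predict(t(X)) for t, clf in members])
    return (votes.mean(axis=0) >= 0.5).astype(int)   # binary 0/1 labels assumed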

3.
To speed up the training of support vector machine (SVM) ensembles, this paper proposes an SVM ensemble method based on a convex-hull algorithm: the hull vectors of each class in the training set are extracted and used as the training set for the base classifiers, which are then combined with the Bagging strategy. During training, base classifiers with poor performance are discarded to further improve ensemble accuracy. Applied to three data sets, the method speeds up SVM ensemble training and classification by an average of 266% and 25%, respectively.
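A sketch of the hull-vector reduction follows, using scipy's qhull-based ConvexHull, which is practical only in low dimensions; the paper's pruning of weak ensemble members is omitted, and the per-class bootstrap is a small safeguard of ours so every member sees both classes.

```python
import numpy as np
from scipy.spatial import ConvexHull
from sklearn.svm import SVC

def hull_vectors(X, y):
    """Keep only each class's convex-hull vertices ('hull vectors')."""
    keep = []
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        hull = ConvexHull(X[idx])            # qhull; low-dimensional data assumed
        keep.extend(idx[hull.vertices])
    return np.array(keep)

def fit_hull_svm_bagging(X, y, n_members=10, seed=0):
    rng = np.random.default_rng(seed)
    keep = hull_vectors(X, y)
    Xh, yh = X[keep], y[keep]                # much smaller training set
    members = []
    for _ in range(n_members):
        boot = np.concatenate([              # per-class bootstrap (Bagging)
            rng.choice(np.where(yh == c)[0], size=int(np.sum(yh == c)), replace=True)
            for c in np.unique(yh)])
        members.append(SVC(kernel="rbf").fit(Xh[boot], yh[boot]))
    return members
```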

4.
To address the class imbalance of network behavior data, this paper proposes an ensemble learning algorithm based on dynamic sampling probabilities, approaching the problem from both data balancing and ensemble learning. Majority-class samples are re-sampled according to a sampling probability distribution, which, compared with random sampling, focuses learning more precisely on misclassified samples. When updating the sampling probabilities, the algorithm uses the test performance of the ensemble of all classifiers obtained in previous iterations, rather than only the classifier from the current iteration. Evaluated on seven UCI data sets and the KDD CUP data set, the algorithm consistently improves classification performance on imbalanced data.
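The sketch below illustrates the abstract's key point: the sampling distribution is updated from the errors of the ensemble built so far, not just the latest member. The weight-doubling update and binary 0/1 labels are our assumptions; the abstract does not specify the update rule.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def dynamic_prob_ensemble(X, y, n_rounds=10, seed=0):
    """Resample the majority class by a probability distribution updated
    from the *ensemble-so-far* errors (binary 0/1 labels assumed)."""
    rng = np.random.default_rng(seed)
    maj = np.where(y == np.argmax(np.bincount(y)))[0]
    mino = np.setdiff1d(np.arange(len(y)), maj)
    p = np.full(len(maj), 1.0 / len(maj))          # sampling distribution
    members = []
    for _ in range(n_rounds):
        pick = rng.choice(maj, size=len(mino), replace=False, p=p)
        idx = np.concatenate([pick, mino])
        members.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
        # Ensemble-so-far prediction on the majority-class samples ...
        votes = np.mean([m.predict(X[maj]) for m in members], axis=0)
        wrong = (votes >= 0.5).astype(int) != y[maj]
        # ... misclassified majority samples get a higher sampling weight
        # (the doubling factor is an illustrative choice).
        p = np.where(wrong, p * 2.0, p)
        p /= p.sum()
    return members
```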

5.
A survey of ensemble classification algorithms for imbalanced data
Ensemble learning is a machine learning technique in which multiple base classifiers decide jointly; training diverse base classifiers on different sample sets yields an ensemble that can markedly improve learning performance. During base classifier training, imbalanced data can be handled with cost-sensitive techniques and data sampling. Because of these advantages, ensemble classification algorithms for imbalanced data have been widely studied. This paper analyzes the state of the art in detail, compares the differences, strengths, and weaknesses of existing algorithms, and identifies and discusses open problems for further research.

6.
To counter the loss of generalization caused by decomposing the training set in large-scale classification, this paper proposes an ensemble learning algorithm based on parallel partitioning of the training set. The training set is partitioned repeatedly by multiple clusters of parallel hyperplanes, a modular support vector machine network is trained as a base classifier on each partition, and at test time the base classifiers' outputs are combined by majority voting. Experiments on three large-scale problems show that, without increasing training or test time, the ensemble keeps the classifier's bias essentially unchanged while effectively reducing its variance, thereby mitigating the loss of generalization caused by partitioning the training set.

7.
Inspired by cascade structures, this paper proposes a new classification method for imbalanced data sets: cascade-structured Bagging. The method gradually balances the data by discarding a portion of the majority-class samples at each level, applies under-sampling to obtain the training set, trains classifiers with the Bagging algorithm, and finally combines the classifiers trained at all levels into a new ensemble. Experiments on 10 UCI data sets show that the method outperforms Bagging and AdaBoost in recall and F-value.
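A sketch of the cascade follows. The abstract does not say which majority samples each level discards; dropping a share of those the current level already classifies correctly (BalanceCascade-style) is our assumption, as are the level count and drop fraction.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

def cascade_bagging(X, y, n_levels=4, drop_per_level=0.25, seed=0):
    """Each level: under-sample to a balanced set, train Bagging, then
    discard part of the majority class so later levels are more balanced."""
    rng = np.random.default_rng(seed)
    maj_label = int(np.argmax(np.bincount(y)))
    maj = np.where(y == maj_label)[0]
    mino = np.where(y != maj_label)[0]
    levels = []
    for _ in range(n_levels):
        pick = rng.choice(maj, size=min(len(maj), len(mino)), replace=False)
        idx = np.concatenate([pick, mino])
        clf = BaggingClassifier(DecisionTreeClassifier(), n_estimators=10,
                                random_state=seed).fit(X[idx], y[idx])
        levels.append(clf)
        # Assumption: remove a fraction of the correctly classified
        # majority samples before the next level.
        correct = maj[clf.predict(X[maj]) == maj_label]
        n_drop = int(drop_per_level * len(correct))
        if n_drop:
            maj = np.setdiff1d(maj, rng.choice(correct, size=n_drop, replace=False))
    return levels

def cascade_predict(levels, X):
    # Final ensemble: majority vote across all levels (0/1 labels assumed).
    votes = np.mean([lv.predict(X) for lv in levels], axis=0)
    return (votes >= 0.5).astype(int)
```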

8.
陈全  赵文辉  李洁  江雨燕 《微机发展》2010,(2):87-89,94
Selective ensembling can achieve better results than either a single learner or an ensemble of all learners, and can significantly improve a learning system's generalization. This paper proposes a multi-level selective ensemble learning algorithm: subsets of base classifiers are repeatedly selected by weight to form several ensemble classifiers, these ensembles are then ensembled again, and the final output is decided by majority voting among them. The ensemble algorithm Ada-ens was evaluated with decision tree and neural network models on 20 standard data sets; the experiments show that the data-based ensemble learning algorithm outperforms the feature-set-based one, with better classification accuracy and generalization.
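A sketch of the two-level scheme follows. The abstract does not define the selection weights; using each pool member's held-out validation accuracy is our assumption, as are the pool, ensemble, and subset sizes.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

def multilevel_selective_ensemble(X, y, pool_size=20, n_ensembles=5,
                                  subset_size=7, seed=0):
    """Weighted partial selection from a classifier pool, repeated to form
    several ensembles that are then combined by majority vote."""
    rng = np.random.default_rng(seed)
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3,
                                                random_state=seed)
    # Pool of base classifiers trained on bootstrap replicates.
    pool, w = [], []
    for i in range(pool_size):
        boot = rng.integers(0, len(y_tr), len(y_tr))
        clf = DecisionTreeClassifier(random_state=i).fit(X_tr[boot], y_tr[boot])
        pool.append(clf)
        w.append(clf.score(X_val, y_val))     # assumed selection weight
    w = np.asarray(w) / np.sum(w)
    # Each ensemble is a weighted random subset of the pool.
    ensembles = [rng.choice(pool_size, size=subset_size, replace=False, p=w)
                 for _ in range(n_ensembles)]

    def predict(X_new):
        ens_votes = []
        for members in ensembles:
            v = np.mean([pool[m].predict(X_new) for m in members], axis=0)
            ens_votes.append((v >= 0.5).astype(int))          # each ensemble votes
        return (np.mean(ens_votes, axis=0) >= 0.5).astype(int)  # re-ensemble
    return predict
```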

9.
To train classifiers from positive and unlabeled data (PU learning), this paper proposes a random-forest-based PU learning algorithm. The POSC4.5 algorithm is extended by adding random feature selection to its decision tree construction. In the training phase, the PU data set is sampled with replacement to generate multiple distinct PU training sets, each of which is used to train the extended POSC4.5 algorithm and build a decision tree; in the classification phase, the trees' outputs are combined by majority voting. Experiments on UCI data sets show that the algorithm outperforms the biased support vector machine algorithm, POSC4.5, and bagging-based POSC4.5.
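POSC4.5 has no public implementation, so the sketch below substitutes a plain decision tree that treats a bootstrap of the unlabeled set as tentative negatives (a standard PU-bagging formulation), with `max_features` standing in for the paper's random feature selection; it illustrates the bootstrap-and-vote structure, not POSC4.5 itself.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def pu_bagging(X, s, n_trees=25, seed=0):
    """PU bagging sketch: s[i]=1 for labeled positives, 0 for unlabeled.
    Each tree sees all positives plus a bootstrap of the unlabeled set,
    treated as tentative negatives (stand-in for extended POSC4.5)."""
    rng = np.random.default_rng(seed)
    pos = np.where(s == 1)[0]
    unl = np.where(s == 0)[0]
    trees = []
    for i in range(n_trees):
        boot = rng.choice(unl, size=len(pos), replace=True)  # with replacement
        idx = np.concatenate([pos, boot])
        lab = np.concatenate([np.ones(len(pos)), np.zeros(len(boot))])
        # max_features mimics the random feature selection added to the
        # tree construction.
        trees.append(DecisionTreeClassifier(max_features="sqrt",
                                            random_state=i).fit(X[idx], lab))
    return trees

def pu_predict(trees, X):
    # Majority vote across the decision trees.
    votes = np.mean([t.predict(X) for t in trees], axis=0)
    return (votes >= 0.5).astype(int)
```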

10.
Currently, web spamming is a serious problem for search engines. It not only degrades the quality of search results by intentionally boosting undesirable web pages to users, but also causes the search engine to waste significant computational and storage resources on useless information. In this paper, we present a novel ensemble classifier for web spam detection, called USCS, which combines the clonal selection algorithm for feature selection with under-sampling for data balancing. The USCS ensemble automatically samples data and selects features for its sub-classifiers. First, the system converts the imbalanced training dataset into several balanced datasets using under-sampling. Second, it automatically selects an optimal feature subset for each sub-classifier using a customized clonal selection algorithm. Third, it builds a C4.5 decision tree sub-classifier from each balanced dataset using the selected features. Finally, these sub-classifiers are combined into an ensemble decision tree classifier that classifies the examples in the testing data. Experiments on the WEBSPAM-UK2006 dataset show that the proposed USCS ensemble web spam classifier achieves significantly better classification performance than several baseline systems and state-of-the-art approaches.

11.
To overcome the limited performance of a single traditional classifier on imbalanced data, this paper proposes a new classification method for binary imbalanced data sets based on generative adversarial networks (GAN) and ensemble learning: the GAN-AdaBoost-DT algorithm. First, a GAN is trained to obtain a generative model that synthesizes minority-class samples, reducing the data imbalance. Second, the generated minority samples are fed into the adaptive boosting (AdaBoost) framework and the sample weights are adjusted, improving the classification performance of an AdaBoost model with decision trees (DT) as base classifiers. Using the area under the receiver operating characteristic curve (AUC) as the evaluation metric, experiments on a credit card fraud data set show that, compared with SMOTE-based ensemble learning, the algorithm improves accuracy by 4.5% and AUC by 6.5%; compared with improved SMOTE ensemble learning, accuracy improves by 4.9% and AUC by 5.9%; and compared with random under-sampling ensemble learning, accuracy improves by 4.5% and AUC by 5.4%. Experiments on other data sets from UCI and KEEL show that the algorithm improves overall accuracy on imbalanced binary classification and optimizes classifier performance.
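The sketch below shows only the augment-then-boost pipeline; `generate_minority` is a hypothetical handle to an already-trained GAN generator, since training the GAN itself (and the paper's modification of the AdaBoost weights) is beyond a short sketch, and scikit-learn's stock AdaBoostClassifier is used in their place.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

def gan_adaboost_dt(X, y, generate_minority, seed=0):
    """Augment the minority class with GAN samples, then fit AdaBoost(DT).

    generate_minority(n) is a hypothetical callable wrapping a trained GAN
    generator that returns n synthetic minority-class rows.
    """
    counts = np.bincount(y)
    mino = int(np.argmin(counts))
    n_needed = int(counts.max() - counts.min())     # bring classes to balance
    X_syn = generate_minority(n_needed)             # assumed GAN output
    X_aug = np.vstack([X, X_syn])
    y_aug = np.concatenate([y, np.full(n_needed, mino)])
    clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=3),
                             n_estimators=100, random_state=seed)
    return clf.fit(X_aug, y_aug)
```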

12.
To address the mildly imbalanced classification problem in web spam detection, this paper proposes three random under-sampling ensemble classifier algorithms: single random under-sampling without replacement (RUS-once), multiple random under-sampling without replacement (RUS-multiple), and random under-sampling with replacement (RUS-replacement). First, one of the random under-sampling techniques converts the training set into balanced sample sets; then a classification and regression tree (CART) classifier is trained on each balanced set; finally, the classifiers are combined by simple voting into an ensemble that classifies the test samples. Experiments show that all three ensembles classify well, with RUS-multiple and RUS-replacement outperforming RUS-once. Compared with CART and its Bagging and AdaBoost ensembles, RUS-multiple and RUS-replacement improve AUC by about 10% on the WEBSPAM UK-2006 data set and by about 25% on WEBSPAM UK-2007; compared with the best published results, they achieve the optimal AUC.
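The abstract names the three sampling protocols but not their exact mechanics; the sketch below infers them from the names (RUS-once: split the shuffled majority class into disjoint balanced chunks; RUS-multiple: independent draws without replacement; RUS-replacement: draws with replacement), with scikit-learn's DecisionTreeClassifier as the CART implementation.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier  # CART implementation

def rus_ensemble(X, y, mode="multiple", n_members=10, seed=0):
    """Three under-sampling ensemble variants, as inferred from the names."""
    rng = np.random.default_rng(seed)
    maj_label = int(np.argmax(np.bincount(y)))
    maj = np.where(y == maj_label)[0]
    mino = np.where(y != maj_label)[0]
    if mode == "once":        # one pass: disjoint chunks of the majority class
        chunks = np.array_split(rng.permutation(maj),
                                max(1, len(maj) // len(mino)))
    elif mode == "multiple":  # independent draws without replacement
        chunks = [rng.choice(maj, size=len(mino), replace=False)
                  for _ in range(n_members)]
    else:                     # "replacement": draws with replacement
        chunks = [rng.choice(maj, size=len(mino), replace=True)
                  for _ in range(n_members)]
    members = []
    for pick in chunks:
        idx = np.concatenate([pick, mino])
        members.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
    return members

def simple_vote(members, X):
    # Simple (unweighted) voting, as in the abstract (0/1 labels assumed).
    votes = np.mean([m.predict(X) for m in members], axis=0)
    return (votes >= 0.5).astype(int)
```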

13.
In the class-imbalanced learning scenario, traditional machine learning algorithms that optimize overall accuracy tend to classify poorly, especially on the minority class in which we are most interested. Many effective approaches have been proposed to solve this problem. Among them, bagging ensemble methods integrated with under-sampling techniques have demonstrated better performance than alternatives such as bagging integrated with over-sampling or cost-sensitive methods. Although these under-sampling techniques promote diversity among the generated base classifiers through random partitioning or sampling of the majority class, they take no measure to ensure individual classification performance, which limits the achievable ensemble performance. On the other hand, evolutionary under-sampling (EUS), a novel under-sampling technique, has been successfully applied to search for the best majority-class subset for training a well-performing nearest neighbor classifier. Inspired by EUS, this paper introduces it into the under-sampling bagging framework and proposes an EUS-based bagging ensemble method, EUS-Bag, with a new fitness function that considers three factors to make EUS better suited to the framework. With this fitness function, EUS-Bag generates a set of accurate and diverse base classifiers. To verify its effectiveness, we conduct a series of comparison experiments on 22 two-class imbalanced classification problems. Experimental results measured by recall, geometric mean, and AUC all demonstrate its superior performance.

14.
To solve the imbalanced classification and "curse of dimensionality" problems in web spam detection, this paper proposes a binary classifier algorithm based on random forests (RF) and under-sampling ensembles. First, under-sampling splits the majority class of the training set into several subsets, each of which is merged with the minority class to form multiple balanced training subsets; then a random forest classifier is trained on each subset; finally, the random forests classify the test set, with each test sample's final class decided by voting. Experiments on the WEBSPAM UK-2006 data set show that the ensemble outperforms the random forest algorithm and its Bagging and AdaBoost ensembles for web spam detection, improving accuracy, F1-measure, and AUC by at least 14%, 13%, and 11%, respectively. Compared with the results of the winning team of the Web Spam Challenge 2007, the ensemble improves F1-measure by at least 1% and achieves the best AUC.

15.
Dynamic weighting ensemble classifiers based on cross-validation
Ensembles of classifiers constitute one of the main current directions in machine learning and data mining. Ensemble methods are commonly divided into static and dynamic ones. Dynamic ensemble methods use different classifiers for different samples and may therefore achieve better generalization than static methods. However, most dynamic approaches based on the KNN rule must set aside part of the training samples to estimate the "local classification performance" of each base classifier. When training samples are insufficient, this lowers the accuracy of the trained model and makes the estimates of the base classifiers' local performance unreliable, hurting the ensemble's overall performance. This paper presents a new dynamic ensemble model that introduces cross-validation into the evaluation of local performance and then dynamically assigns a weight to each component classifier. Experimental results on 10 UCI data sets demonstrate that when the training set is not large, the proposed method outperforms several dynamic ensemble methods as well as some classical static ensemble approaches.
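A sketch of the core idea follows: out-of-fold cross-validation predictions supply each base classifier's correctness on every training point, so no samples need to be held out, and at test time each classifier is weighted by its out-of-fold accuracy on the test sample's nearest training neighbors. The choice of three base learners, k, and the fold count are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

def fit_dynamic_cv_ensemble(X, y, k=7, cv=5):
    """Cross-validation replaces a held-out set for estimating each base
    classifier's local performance (integer class labels assumed)."""
    base = [DecisionTreeClassifier(), GaussianNB(),
            LogisticRegression(max_iter=1000)]
    # Out-of-fold correctness per classifier, per training sample.
    oof_correct = np.stack([cross_val_predict(m, X, y, cv=cv) == y
                            for m in base])
    fitted = [m.fit(X, y) for m in base]
    nn = NearestNeighbors(n_neighbors=k).fit(X)

    def predict(X_new):
        _, nb = nn.kneighbors(X_new)             # neighbors in the training set
        preds = np.stack([m.predict(X_new) for m in fitted])
        out = []
        for j in range(len(X_new)):
            w = oof_correct[:, nb[j]].mean(axis=1)   # local accuracy -> weight
            score = np.bincount(preds[:, j].astype(int), weights=w)
            out.append(np.argmax(score))             # weighted vote
        return np.array(out)
    return predict
```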

16.
Structured ensemble learning for spam filtering
To resolve the conflict between low computational complexity and high classification accuracy in spam filtering, this paper proposes a structured ensemble learning approach within a multi-field learning framework: it combines the results of multiple base classifiers according to document structure in pursuit of better classification performance. String features of e-mail documents are used to generate several lightweight base classifiers, and labeled data are stored in a string-frequency index, so each update and query costs constant time. Exploiting the multi-field structure of e-mail documents, the paper proposes linear combination weights based on the historical effectiveness of each field's classifier and on the classification capability of the current field's document, as well as a combined linear weight over both that improves overall accuracy. Results on the TREC immediate full-feedback spam filtering task show that the structured ensemble method with the combined linear weight completes the filtering task in a short time (47.24 min) while reaching an overall 1-ROCA of 0.0055, matching the performance of the best filter in the TREC 2007 evaluation.

17.
Rotation Forest, an effective ensemble classifier generation technique, works by using principal component analysis (PCA) to rotate the original feature axes so that different training sets can be formed for learning the base classifiers. This paper presents a variant of Rotation Forest that can be viewed as a combination of Bagging and Rotation Forest: Bagging injects additional randomness into Rotation Forest to increase the diversity of the ensemble membership. Experiments conducted on 33 benchmark classification data sets from the UCI repository, with a classification tree as the base learning algorithm, demonstrate that the proposed method generally produces ensemble classifiers with lower error than Bagging, AdaBoost, and Rotation Forest. A bias-variance analysis of the error shows that the proposed method improves on the prediction error of a single classifier by reducing the variance term much more than the other ensemble procedures considered. Furthermore, results on data sets with artificial classification noise indicate that the new method is more robust to noise, and kappa-error diagrams are employed to investigate the diversity-accuracy patterns of the ensemble classifiers.

18.
The ensemble method is a powerful data mining paradigm that builds a classification model by integrating multiple diversified component learners. Bagging, one of the most successful ensemble methods, builds bootstrap-inspired classifiers and aggregates them into a combined classifier. In bagging, however, the bootstrapped training sets become more and more similar as redundancy increases. Besides redundancy, any training set is usually subject to noise, and it may also be imbalanced; thus each training instance has a different impact on the learning process. This paper explores properties of the ensemble margin and their use in improving the performance of bagging. We introduce a new approach, based on margin theory, to measure the importance of training data in learning, and then propose a new bagging method that concentrates on critical instances. This method is more accurate than bagging and more robust than boosting. Compared with bagging, it reduces the bias while generally keeping the same variance. Our findings suggest that (a) examples with low margins tend to be more critical for classifier performance; (b) examples with higher margins tend to be more redundant; and (c) misclassified examples with high margins tend to be noisy examples. Experimental results on 15 varied data sets show that the generalization error of bagging can be reduced by up to 2.5% and its resilience to noise strengthened by iteratively removing both typical and noisy training instances, reducing the training set size by up to 75%.
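A single filtering pass of the margin idea can be sketched as follows: a probe bagging ensemble supplies each training instance's margin (vote share for the true class, in [-1, 1]); instances with extreme margins, redundant high-margin correct ones and high-margin errors likely to be noise, are removed before retraining. The paper iterates this process; the one-shot version and the drop fraction here are simplifying assumptions.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

def margin_filtered_bagging(X, y, drop_frac=0.25, seed=0):
    """One filtering pass guided by the ensemble margin (paper iterates)."""
    probe = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
                              random_state=seed).fit(X, y)
    # Per-instance ensemble margin: true-class vote share mapped to [-1, 1].
    votes = np.stack([est.predict(X) for est in probe.estimators_])
    margin = 2 * (votes == y).mean(axis=0) - 1
    # Extreme margins first: strongly positive = redundant (typical),
    # strongly negative = likely noise; both are removed.
    order = np.argsort(-np.abs(margin))
    keep = np.sort(order[int(drop_frac * len(y)):])
    return BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
                             random_state=seed).fit(X[keep], y[keep])
```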

19.
A classification method for imbalanced data based on clustering fusion
Classifying imbalanced data has become a hot topic in data mining and machine learning. This paper proposes a clustering-fusion-based classification method for imbalanced data, aimed at the low recognition rate of traditional classification methods on the minority class. The method introduces a "clustering consistency coefficient" to identify minority-class samples in the boundary region and majority-class samples in the central region, then treats the minority and majority classes of the training set differently with an improved SMOTE over-sampling method and an improved random under-sampling method, improving class balance and providing a better training platform for the classification algorithm. Experiments comparing eight methods on public data sets show that the method achieves high recognition rates on both the minority and majority classes.
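The abstract does not define the clustering consistency coefficient; the sketch below adopts one plausible reading (the fraction of clustering runs in which a sample lands in a cluster dominated by its own class) and substitutes plain SMOTE for the paper's improved variant, so it is an interpretation rather than the paper's method.

```python
import numpy as np
from sklearn.cluster import KMeans
from imblearn.over_sampling import SMOTE

def consistency_coefficient(X, y, n_runs=5, n_clusters=8, seed=0):
    """Fraction of clustering runs in which a sample falls in a cluster
    dominated by its own class -- one reading of the paper's coefficient."""
    agree = np.zeros(len(y))
    for r in range(n_runs):
        lab = KMeans(n_clusters=n_clusters, n_init=5,
                     random_state=seed + r).fit_predict(X)
        for c in np.unique(lab):
            idx = np.where(lab == c)[0]
            maj = np.argmax(np.bincount(y[idx]))
            agree[idx] += (y[idx] == maj)       # in a same-class cluster?
    return agree / n_runs

def rebalance(X, y, seed=0):
    cc = consistency_coefficient(X, y, seed=seed)
    maj_label = int(np.argmax(np.bincount(y)))
    # Drop the "safest" majority samples: high consistency = class center.
    maj = np.where(y == maj_label)[0]
    drop = maj[np.argsort(-cc[maj])[: len(maj) // 3]]   # 1/3 is illustrative
    keep = np.setdiff1d(np.arange(len(y)), drop)
    # Then over-sample toward balance (plain SMOTE stands in for the
    # paper's improved, boundary-focused SMOTE).
    return SMOTE(random_state=seed).fit_resample(X[keep], y[keep])
```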
