Similar Literature
20 similar documents found (search time: 390 ms)
1.
To address imbalanced multi-class classification, an ensemble classification algorithm for imbalanced data based on sampling and feature selection (IDESF) is proposed. Because the diversity of the base classifiers affects an ensemble's classification performance, IDESF applies a two-stage sampling scheme to the data set: sampling with replacement followed by SMOTE. While keeping the resulting samples plausible, the two-stage sampling increases the differences between the sampled data sets and thus implicitly raises the diversity of the base classifiers. It also balances the class distribution, preventing the classifiers from being biased toward the majority class. On top of the two-stage sampling, IDESF introduces data cleaning and feature selection to further improve classification performance. Comparative experiments against other imbalanced-classification algorithms on five imbalanced data sets show that the algorithm achieves higher AUCarea and G-Mean values and delivers comparatively excellent classification results.
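A minimal sketch of the two-stage sampling idea (bootstrap with replacement, then SMOTE) used to build a balanced, diverse ensemble; IDESF's data-cleaning and feature-selection stages are omitted, and the function names and base learner are illustrative choices, not the authors' code:

```python
# Two-stage sampling (bootstrap + SMOTE) for an imbalanced ensemble; assumes
# scikit-learn and imbalanced-learn, and non-negative integer class labels.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.tree import DecisionTreeClassifier
from sklearn.utils import resample

def fit_two_stage_ensemble(X, y, n_estimators=10, random_state=0):
    rng = np.random.RandomState(random_state)
    ensemble = []
    for _ in range(n_estimators):
        # Stage 1: bootstrap sampling differentiates the training sets,
        # implicitly raising base-classifier diversity.
        Xb, yb = resample(X, y, replace=True, random_state=rng.randint(1 << 30))
        # Stage 2: SMOTE balances the class distribution of each bootstrap.
        Xs, ys = SMOTE(random_state=rng.randint(1 << 30)).fit_resample(Xb, yb)
        ensemble.append(DecisionTreeClassifier().fit(Xs, ys))
    return ensemble

def predict_majority(ensemble, X):
    votes = np.stack([clf.predict(X) for clf in ensemble])
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```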

2.
To further improve the generalization of maritime radar target recognition under complex interference, a dynamic ensemble selection algorithm based on k-medoids clustering and the randomized reference classifier (RRC), called KMRRC, is proposed. Resampling is first used to generate multiple base classifiers; the base classifiers are then partitioned into clusters according to a pairwise diversity measure, and an RRC model is built for each base classifier on a validation set; finally, the RRC is used to dynamically select the most competent base classifiers from each cluster for the ensemble decision. The parameter settings of KMRRC are determined through tuning experiments, after which KMRRC is compared with nine commonly used ensemble and base classification algorithms, using Java calls to the Weka API, on a self-built fully polarimetric high-resolution range profile (HRRP) target sample library and 17 UCI data sets; the influence of the choice of diversity measure on KMRRC's performance is also studied. The experiments verify the effectiveness of the proposed algorithm for maritime radar target recognition.
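A hedged sketch of the grouping step only: pairwise disagreement (one possible pairwise diversity measure; the paper studies several) turns the base classifiers into a distance matrix, which a tiny k-medoids loop then partitions. The RRC competence model and the dynamic selection itself are omitted, and all names are illustrative:

```python
# Cluster base classifiers by pairwise disagreement on a validation set.
import numpy as np

def disagreement_matrix(preds):
    # preds: (n_classifiers, n_samples) predictions on a validation set.
    n = preds.shape[0]
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            D[i, j] = np.mean(preds[i] != preds[j])
    return D

def k_medoids(D, k, n_iter=100, seed=0):
    # Tiny PAM-style loop over a precomputed distance matrix.
    rng = np.random.RandomState(seed)
    medoids = rng.choice(len(D), k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)
        new = medoids.copy()
        for c in range(k):
            members = np.flatnonzero(labels == c)
            if len(members):
                # New medoid: the member minimizing total in-cluster distance.
                new[c] = members[np.argmin(D[np.ix_(members, members)].sum(axis=1))]
        if np.array_equal(new, medoids):
            break
        medoids = new
    return labels, medoids
```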

3.
To address the insufficient diversity among classifiers in ensemble learning and the scarcity of labeled samples, a new semi-supervised ensemble learning algorithm is proposed. It brings semi-supervised methods into ensemble learning, using the information in a large number of unlabeled samples to refine each base classifier while constructing base classifiers with greater diversity. Suitable unlabeled samples are first selected with a multi-view method, the large and heterogeneous set of feature attributes is partitioned into views, and different dimensionality-reduction methods are applied to different views to ease their input into the learning models; mutually independent learning models are used at the same time to increase ensemble diversity. Experimental results on UCI data sets show that multi-view data yields more accurate classification than single-view data and that, compared with existing methods such as Boosting and tri-training, using more diverse base learners and introducing semi-supervised methods effectively improves ensemble performance.

4.
To address the lack of supervision information in network traffic feature selection, a semi-supervised feature selection algorithm for network traffic based on pairwise constraint expansion is proposed. The algorithm considers a small number of pairwise constraints together with a large number of unlabeled samples and exploits the correlation and self-correlation between sample sets to propagate the pairwise constraint set onto the unlabeled samples, producing more highly reliable pairwise constraints that reveal the distribution of the sample space. Finally, the expanded pairwise constraint set is used for feature selection. Experiments show that, compared with the same algorithm without constraint expansion, this algorithm achieves better classification performance from a small number of initial pairwise constraints.

5.
Gao Feng, Huang Haiyan. 《计算机科学》 (Computer Science), 2017, 44(8): 225-229
Imbalanced data severely degrades the performance of traditional classification algorithms, lowering the recognition rate of the minority class. A hybrid sampling technique based on neighborhood characteristics is proposed: it determines sampling weights from the class distribution in each sample's neighborhood and then applies hybrid sampling to obtain a balanced data set. A dynamic ensemble method based on local confidence is then used: base classifiers are generated by classification learning, and for each test sample the best base classifiers are dynamically selected and combined according to their local classification accuracy. Experiments on standard UCI data sets show that the method simultaneously improves the classification accuracy of both the minority and the majority classes in imbalanced data.
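A sketch of the neighborhood-weighting idea: each sample's weight is derived from the class mix among its k nearest neighbors, so borderline samples are drawn more often during resampling. The exact weighting rule below is an illustrative assumption, not the paper's formula:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def neighborhood_weights(X, y, k=5):
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    idx = nn.kneighbors(X, return_distance=False)[:, 1:]  # drop each point itself
    # Weight grows with the fraction of opposite-class neighbors.
    opposite = (y[idx] != y[:, None]).mean(axis=1)
    w = 1.0 + opposite
    return w / w.sum()

# Usage: draw a resampled index set biased toward borderline samples.
# rng = np.random.RandomState(0)
# take = rng.choice(len(X), size=len(X), p=neighborhood_weights(X, y))
```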

6.
Wang Lei. 《计算机科学》 (Computer Science), 2009, 36(10): 234-236
Two selective ensemble algorithms for support vector machines based on constraint projection are proposed. First, randomly selected sets of must-link and cannot-link pairwise constraints determine projection matrices that project the original training samples into different low-dimensional spaces, in which a group of base classifiers is trained; then the base classifiers are combined with two selective-ensemble techniques, genetic optimization and deviation-error minimization, respectively. Experiments on UCI data show that both ensemble algorithms effectively improve the generalization of support vector machines and significantly outperform ensemble algorithms such as Bagging, Boosting, feature Bagging, and LoBag.

7.
Chen Quan, Zhao Wenhui, Li Jie, Jiang Yuyan. 《微机发展》 (Microcomputer Development), 2010, (2): 87-89, 94
Selective ensembles can learn better than both a single learner and an ensemble of all learners, significantly improving the generalization of a learning system. This paper proposes a multi-level selective ensemble learning algorithm: subsets of the base classifiers are repeatedly selected according to their weights to form multiple ensemble classifiers, these ensemble classifiers are then ensembled again, and the output of the algorithm is finally decided by majority voting among the ensemble classifiers; a sketch of this scheme follows below. The ensemble learning algorithm Ada-ens is studied experimentally with decision-tree and neural-network models on 20 standard data sets; the experiments show that the data-based ensemble algorithm outperforms the feature-set-based ensemble algorithm, with better classification accuracy and generalization.
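A minimal sketch of the multi-level idea: weighted subsets of the base classifiers form several sub-ensembles, each sub-ensemble takes a majority vote, and a second-level majority vote over the sub-ensembles decides the output. Using positive per-classifier scores (e.g., validation accuracies) as selection weights is an assumption for illustration:

```python
import numpy as np

def majority(votes):
    # votes: (n_voters, n_samples) non-negative integer predictions.
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

def multilevel_vote(preds, weights, n_subensembles=7, subset=0.6, seed=0):
    # preds: (n_classifiers, n_samples); weights: positive per-classifier scores.
    rng = np.random.RandomState(seed)
    p = weights / weights.sum()
    size = max(1, int(subset * len(p)))
    level1 = []
    for _ in range(n_subensembles):
        pick = rng.choice(len(p), size=size, replace=False, p=p)
        level1.append(majority(preds[pick]))   # each sub-ensemble votes
    return majority(np.stack(level1))          # second-level majority vote
```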


9.
To remedy the shortcomings of AdaBoost.M2 in classifying multi-class imbalanced protocol traffic, an ensemble learning algorithm for multi-class imbalanced classification of Internet protocol traffic, RBWS-ADAM2, is proposed. In each AdaBoost.M2 iteration, the algorithm preprocesses the training data with a weight-based random balanced resampling strategy: resampling with a randomly set balance point changes the sample-count ratio between the majority and minority classes so as to build multiple differing training sets, while sample weights serve as the filtering criterion so that high-weight samples are retained as far as possible, strengthening the learning of such samples. Experimental comparison of RBWS-ADAM2 with similar algorithms on publicly available protocol traffic data sets shows that it not only substantially improves the F-measure of some minority classes but also effectively raises the ensemble's overall G-mean and overall average F-measure, clearly enhancing the ensemble classifier's overall performance.
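A sketch of one weight-based random balanced resampling step: every class is resampled toward a randomly drawn balance point between the minority and majority class sizes, and sample weights bias the draw toward high-weight samples. This shows the preprocessing idea only, not the full AdaBoost.M2 loop, and the details are illustrative:

```python
import numpy as np

def balanced_resample(X, y, w, rng):
    # w: positive AdaBoost-style sample weights.
    classes, counts = np.unique(y, return_counts=True)
    # Randomly set the balance point between minority and majority sizes.
    target = rng.randint(counts.min(), counts.max() + 1)
    take = []
    for c in classes:
        idx = np.flatnonzero(y == c)
        p = w[idx] / w[idx].sum()   # prefer keeping high-weight samples
        take.append(rng.choice(idx, size=target, replace=len(idx) < target, p=p))
    take = np.concatenate(take)
    return X[take], y[take]
```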

10.
Based on the idea of fuzzy clustering, a new two-level ensemble classifier algorithm is proposed. The data set is clustered with the Fuzzy C-Means algorithm, yielding each instance's fuzzy membership degree with respect to each class. The first-level ensemble obtains member classifiers via Bagging, with as many classifiers as there are classes and each member classifier corresponding to one class label; each member classifier's training set is drawn by random resampling after weighting every instance by its fuzzy membership in the corresponding class. The second-level ensemble combines the class-specific member classifiers produced at the first level through dynamically weighted majority voting to learn the final classification. The algorithm, called EWFuzzyBagging, is shown experimentally to be more robust than Bagging and AdaBoost.
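A sketch of the membership computation the first-level sampling relies on: the standard Fuzzy C-Means membership formula evaluated against fixed centers (KMeans centers stand in for a full FCM run, an assumption for brevity; m is the usual fuzzifier). Column k of the result would weight the resampling for the member classifier of class k:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances

def fuzzy_memberships(X, n_clusters, m=2.0, seed=0):
    centers = KMeans(n_clusters=n_clusters, n_init=10,
                     random_state=seed).fit(X).cluster_centers_
    d = pairwise_distances(X, centers) + 1e-12      # avoid division by zero
    # u[i, k] = 1 / sum_j (d_ik / d_ij)^(2 / (m - 1)); rows sum to 1.
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=2)
```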

11.
Diversity is an important factor in improving the generalization of classifier ensembles. Using an entropy diversity measure and training base classifiers on data subsets, this work studies ensemble learning where individual models are selected by hill-climbing selection, ensemble forward selection, ensemble backward selection, and clustering-based selection strategies. Experimental results show that ensembles built from the more diverse individual models picked by the selection strategies enjoy a clear performance advantage; overall, hill-climbing selection outperforms both ensemble forward and ensemble backward selection. In addition, for ensembles selected by clustering, the inter-model diversity changes little once the ensemble accuracy stabilizes, and the number of clusters also has some influence on ensemble performance and inter-model diversity.
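A sketch of hill-climbing selection over a pool of base classifiers: toggle one member at a time and keep a flip only when the majority-vote accuracy on a validation set improves. Validation accuracy stands in for the paper's entropy diversity criterion, an assumption for illustration:

```python
import numpy as np

def vote_accuracy(preds, mask, y_val):
    # Majority-vote accuracy of the selected subset on the validation set.
    if not mask.any():
        return 0.0
    maj = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, preds[mask])
    return float(np.mean(maj == y_val))

def hill_climb_select(preds, y_val, n_restarts=5, seed=0):
    # preds: (n_classifiers, n_val) integer predictions on a validation set.
    rng = np.random.RandomState(seed)
    best_mask, best_acc = None, -1.0
    for _ in range(n_restarts):
        mask = rng.rand(preds.shape[0]) < 0.5
        cur = vote_accuracy(preds, mask, y_val)
        improved = True
        while improved:
            improved = False
            for i in range(preds.shape[0]):
                mask[i] = not mask[i]          # try flipping one member in/out
                acc = vote_accuracy(preds, mask, y_val)
                if acc > cur:
                    cur, improved = acc, True  # keep the improving flip
                else:
                    mask[i] = not mask[i]      # revert
        if cur > best_acc:
            best_mask, best_acc = mask.copy(), cur
    return best_mask, best_acc
```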

12.
To address the fact that constraint-score-based feature selection is easily affected by the composition and cardinality of the pairwise constraints, a dynamic ensemble selection algorithm based on bagging constraint score (BCS-DES) is proposed. The algorithm introduces the bagging constraint score (BCS) into dynamic ensemble selection: the sample space is partitioned into different regions, and a multi-population parallel genetic algorithm selects a locally optimal classification ensemble for each test sample, improving classification accuracy. Experiments on UCI data sets show that BCS-DES is less affected by the composition and cardinality of the pairwise constraints than existing feature selection algorithms and performs better.

13.
An Ensemble Co-training Algorithm Based on Rotation Forest   (Total citations: 1; self-citations: 0; by others: 1)
Ensemble co-training is a semi-supervised learning method that combines ensemble learning with the co-training algorithm, while rotation forest is an ensemble learning method that uses feature extraction to build diversity among base classifiers. Building on existing research into ensemble co-training, this work proposes a rotation-forest-based co-training algorithm, ROFCO. The method focuses on using unlabeled data to improve the diversity among base classifiers and the quality of feature extraction: while keeping the generalization error of the base classifiers unchanged or reducing it, it maintains or even increases the diversity among them, improving the ensemble. Experimental results show that the method performs well.

14.
Ensemble learning algorithms train multiple component learners and then combine their predictions. In order to generate a strong ensemble, the component learners should have high accuracy as well as high diversity. A popular scheme for generating accurate but diverse component learners is to perturb the training data with resampling methods, such as the bootstrap sampling used in bagging. However, such a scheme is not very effective for local learners such as nearest-neighbor classifiers, because a slight change in the training data can hardly produce local learners with big differences. In this paper, a new ensemble algorithm named Filtered Attribute Subspace based Bagging with Injected Randomness (FASBIR) is proposed for building ensembles of local learners; it utilizes multimodal perturbation to help generate accurate but diverse component learners. In detail, FASBIR perturbs the training data with bootstrap sampling, the input attributes with attribute filtering and attribute subspace selection, and the learning parameters with randomly configured distance metrics. A large empirical study shows that FASBIR is effective in building ensembles of nearest-neighbor classifiers, whose performance is better than that of many other ensemble algorithms.
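A hedged sketch of FASBIR's multimodal perturbation for nearest-neighbor ensembles: bootstrap the training data, draw a random attribute subspace, and randomize a learning parameter (here the Minkowski exponent p of the distance metric). The attribute-filtering step is omitted, and the parameter ranges are illustrative assumptions:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.utils import resample

def fit_fasbir_like(X, y, n_estimators=20, subspace=0.5, seed=0):
    rng = np.random.RandomState(seed)
    members, d = [], X.shape[1]
    for _ in range(n_estimators):
        # Perturb the training data: bootstrap sampling.
        Xb, yb = resample(X, y, replace=True, random_state=rng.randint(1 << 30))
        # Perturb the input attributes: random attribute subspace.
        feats = rng.choice(d, size=max(1, int(subspace * d)), replace=False)
        # Perturb the learning parameters: randomly configured Minkowski metric.
        p = int(rng.choice([1, 2, 3]))
        knn = KNeighborsClassifier(n_neighbors=5, p=p).fit(Xb[:, feats], yb)
        members.append((knn, feats))
    return members

def predict_vote(members, X):
    votes = np.stack([knn.predict(X[:, feats]) for knn, feats in members])
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```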

15.
Information Fusion, 2005, 6(1): 83-98
Ensembles of learnt models constitute one of the main current directions in machine learning and data mining. Ensembles allow us to achieve higher accuracy, which is often not achievable with single models. It was shown theoretically and experimentally that, in order for an ensemble to be effective, it should consist of base classifiers that have diversity in their predictions. One technique that proved to be effective for constructing an ensemble of diverse base classifiers is the use of different feature subsets, or so-called ensemble feature selection. Many ensemble feature selection strategies incorporate diversity as an objective in the search for the best collection of feature subsets. A number of ways are known to quantify diversity in ensembles of classifiers, but little research has been done on their appropriateness for ensemble feature selection. In this paper, we compare five measures of diversity with regard to their possible use in ensemble feature selection. We conduct experiments on 21 data sets from the UCI machine learning repository, comparing the ensemble accuracy and other characteristics for ensembles built with ensemble feature selection based on the considered measures of diversity. We consider four search strategies for ensemble feature selection together with simple random subspacing: genetic search, hill-climbing, and ensemble forward and backward sequential selection. In the experiments, we show that, in some cases, the ensemble feature selection process can be sensitive to the choice of the diversity measure, and that the superiority of a particular measure depends on the context in which diversity is used and on the data being processed. In many cases and on average, the plain disagreement measure is the best. Genetic search, kappa, and dynamic voting with selection form the best combination of a search strategy, diversity measure, and integration method.
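Two of the diversity measures named above are easy to state concretely; a small sketch, with kappa being the standard chance-corrected pairwise agreement:

```python
import numpy as np

def plain_disagreement(pred_a, pred_b):
    # Fraction of instances on which the two classifiers differ.
    return float(np.mean(pred_a != pred_b))

def pairwise_kappa(pred_a, pred_b, n_classes):
    # Agreement corrected for chance; lower kappa means more diversity.
    n = len(pred_a)
    p_o = np.mean(pred_a == pred_b)                    # observed agreement
    pa = np.bincount(pred_a, minlength=n_classes) / n  # label marginals
    pb = np.bincount(pred_b, minlength=n_classes) / n
    p_e = float(np.dot(pa, pb))                        # chance agreement
    return (p_o - p_e) / (1.0 - p_e + 1e-12)
```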

16.
Information Fusion, 2003, 4(2): 87-100
A popular method for creating an accurate classifier from a set of training data is to build several classifiers and then combine their predictions. Ensembles of simple Bayesian classifiers have traditionally not been a focus of research. One way to generate an ensemble of accurate and diverse simple Bayesian classifiers is to use different feature subsets generated with the random subspace method. In this case, the ensemble consists of multiple classifiers constructed by randomly selecting feature subsets, that is, classifiers constructed in randomly chosen subspaces. In this paper, we present an algorithm for building ensembles of simple Bayesian classifiers in random subspaces. The EFS_SBC algorithm includes a hill-climbing-based refinement cycle, which tries to improve the accuracy and diversity of the base classifiers built on random feature subsets. We conduct a number of experiments on a collection of 21 real-world and synthetic data sets, comparing the EFS_SBC ensembles with the single simple Bayes and with the boosted simple Bayes. In many cases the EFS_SBC ensembles have higher accuracy than both the single simple Bayesian classifier and the boosted Bayesian ensemble. We find that the ensembles produced with a focus on diversity have lower generalization error, and that the degree of importance of diversity in building the ensembles differs between data sets. We propose several methods for the integration of simple Bayesian classifiers in the ensembles. In a number of cases, the techniques for dynamic integration of classifiers have significantly better classification accuracy than their simple static analogues. We suggest that this is because dynamic integration utilizes the ensemble coverage better than static integration.
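A minimal sketch of the random-subspace construction for naive Bayes ensembles with static integration by probability averaging; EFS_SBC's hill-climbing refinement cycle and dynamic integration are not shown, and the names are illustrative:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def fit_random_subspace_nb(X, y, n_estimators=15, subspace=0.5, seed=0):
    rng = np.random.RandomState(seed)
    d, members = X.shape[1], []
    for _ in range(n_estimators):
        # Each member is trained in a randomly chosen subspace.
        feats = rng.choice(d, size=max(1, int(subspace * d)), replace=False)
        members.append((GaussianNB().fit(X[:, feats], y), feats))
    return members

def predict_proba_mean(members, X):
    # Static integration: average the class probabilities across members.
    return np.mean([nb.predict_proba(X[:, feats]) for nb, feats in members],
                   axis=0)
```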

17.
This paper presents a method for improved ensemble learning, by treating the optimization of an ensemble of classifiers as a compressed sensing problem. Ensemble learning methods improve the performance of a learned predictor by integrating a weighted combination of multiple predictive models. Ideally, the number of models needed in the ensemble should be minimized while the weights associated with each included model are optimized. We solve this problem by treating it as an instance of the compressed sensing problem, in which a sparse solution must be reconstructed from an under-determined linear system. Compressed sensing techniques are then employed to find an ensemble that is both small and effective. An additional contribution of this paper is a new performance evaluation method (a new pairwise diversity measurement) called the roulette-wheel kappa-error. This method takes into account the different weightings of the classifiers and also reduces the total number of classifier pairs needed in the kappa-error diagram, by selecting pairs through a roulette-wheel selection method according to the weightings of the classifiers. This approach can greatly improve the clarity and informativeness of the kappa-error diagram, especially when the number of classifiers in the ensemble is large. We use 25 different public data sets to evaluate and compare the performance of compressed sensing ensembles using four different sparse reconstruction algorithms, combined with two different classifier learning algorithms and two different training data manipulation techniques. We also compare our method against five other state-of-the-art pruning methods. These experiments show that our method produces comparable or better accuracy while being significantly faster than the compared methods.
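A sketch of the compressed-sensing view of pruning: stack the base classifiers' validation outputs into a matrix P and recover a sparse weight vector w with P w ≈ y via Orthogonal Matching Pursuit, one sparse reconstruction algorithm of the kind the paper compares. Binary 0/1 targets are assumed for simplicity, and the names are illustrative:

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def prune_by_omp(preds, y_val, n_keep=5):
    # preds: (n_classifiers, n_val) outputs on a validation set; each column
    # of P corresponds to one base classifier.
    P = preds.T.astype(float)
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_keep)
    omp.fit(P, y_val.astype(float))
    w = omp.coef_
    keep = np.flatnonzero(w)    # the small, weighted ensemble
    return keep, w[keep]
```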

18.
Constraint Score is a recently proposed method for feature selection using pairwise constraints, which specify whether a pair of instances belongs to the same class or not. It has been shown that Constraint Score, with only a small number of pairwise constraints, achieves performance comparable to fully supervised feature selection methods such as Fisher Score. However, one major disadvantage of Constraint Score is that its performance depends on a good choice of the composition and cardinality of the constraint set, which is very challenging in practice. In this work, we address this problem by importing Bagging into Constraint Score, yielding a new method called Bagging Constraint Score (BCS). Instead of seeking one appropriate constraint set for a single Constraint Score, BCS performs multiple Constraint Scores, each using a bootstrapped subset of the originally given constraint set. Diversity analysis of the ensemble's individuals shows that resampling pairwise constraints helps to simultaneously improve the accuracy and diversity of the individuals. We conduct extensive experiments on a series of high-dimensional data sets from the UCI repository and gene databases, and the experimental results validate the effectiveness of the proposed method.
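A sketch of the BCS idea under the ratio form of Constraint Score (must-link scatter over cannot-link scatter per feature, lower is better): bootstrap the constraint set, score the features with each bootstrapped subset, and aggregate the resulting rankings. The aggregation by mean rank is an illustrative choice:

```python
import numpy as np

def constraint_score(X, must, cannot):
    # must, cannot: (n_pairs, 2) index pairs; returns one score per feature.
    ml = ((X[must[:, 0]] - X[must[:, 1]]) ** 2).sum(axis=0)
    cl = ((X[cannot[:, 0]] - X[cannot[:, 1]]) ** 2).sum(axis=0)
    return ml / (cl + 1e-12)

def bagging_constraint_score(X, must, cannot, n_bags=20, seed=0):
    rng = np.random.RandomState(seed)
    ranks = np.zeros(X.shape[1])
    for _ in range(n_bags):
        # Bootstrap the pairwise constraint set, not the data.
        m = must[rng.randint(len(must), size=len(must))]
        c = cannot[rng.randint(len(cannot), size=len(cannot))]
        ranks += np.argsort(np.argsort(constraint_score(X, m, c)))
    return np.argsort(ranks)    # feature indices ordered best-first
```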

19.
Fault Diagnosis Based on Support Vector Machine Ensembles   (Total citations: 3; self-citations: 2; by others: 3)
To improve the accuracy of fault diagnosis, an ensemble learning method for support vector machines based on a genetic algorithm is proposed; the corresponding genetic operators are defined, and classifier construction strategies within the ensemble are explored. Simulation results on the diagnosis of turbine rotor imbalance faults show that ensemble learning usually outperforms a single support vector machine, that the proposed method outperforms traditional ensemble methods such as Bagging and Boosting while the resulting ensemble contains fewer classifiers, and that combining multiple classifier construction strategies increases classifier diversity. The method generalizes easily to other learning algorithms such as neural networks and decision trees.

20.
Ensemble learning is a common approach to classifying data streams. To obtain better classification results, a data-stream classification method based on stacked ensembles is presented: a meta-classifier is trained to combine the base classifiers. Experimental results show that, compared with voting- or weighted-voting-based ensembles, the stacking-based method adapts to concept drift more quickly and achieves higher prediction accuracy.
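A minimal sketch of the stacking idea using scikit-learn's StackingClassifier: a combining classifier is trained on out-of-fold predictions of the base classifiers instead of taking a (weighted) vote. The stream-specific handling of concept drift is not shown, and the choice of base and meta learners is illustrative:

```python
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

stack = StackingClassifier(
    estimators=[("tree", DecisionTreeClassifier()),
                ("nb", GaussianNB()),
                ("rf", RandomForestClassifier(n_estimators=50))],
    final_estimator=LogisticRegression(max_iter=1000),  # the combining classifier
    cv=5,   # out-of-fold predictions train the meta-level, avoiding leakage
)
# Usage: stack.fit(X_train, y_train); stack.predict(X_test)
```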

