Similar Documents
20 similar documents found
1.
尹光  朱玉全  陈耿 《计算机工程》2012,38(8):167-169
To improve the classification performance of ensemble classifier systems, a classifier-selection ensemble algorithm, MCC-SCEN, is proposed. The algorithm selects from the pool of base classifiers the subset with the greatest mutual-information diversity and the subset with the greatest individual classification ability, uses them to determine a candidate set for expansion, and then adds base classifiers with high combined classification ability to this set to form the ensemble system, whose result is produced by weighted voting. Experimental results show that the method outperforms the classical AdaBoost and Bagging methods and achieves higher classification accuracy.
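The abstract gives only the outline of MCC-SCEN, but its final step, weighted voting over the selected base classifiers, can be sketched as follows. This is a minimal illustration, not the paper's code; the scikit-learn-style `predict` interface and integer class labels 0..n_classes-1 are assumptions.

```python
import numpy as np

def weighted_vote(classifiers, weights, x, n_classes):
    """Combine base classifiers by weighted voting (illustrative sketch)."""
    votes = np.zeros(n_classes)
    for clf, w in zip(classifiers, weights):
        votes[clf.predict([x])[0]] += w  # each classifier adds its weight to its predicted class
    return int(np.argmax(votes))         # class with the largest total weight wins
```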

2.
To address the demands for high classification performance and the complexity of the ensemble process in classifier ensembles, this work analyzes the strengths and weaknesses of conventional ensemble methods, studies existing diversity measures for classifiers, and proposes a hierarchical classifier ensemble system built by screening for base classifiers with the largest possible diversity. Different base classifiers are trained, the more accurate ones are kept as candidates, their diversity is analyzed, and the most diverse classifiers are chosen as the base classifiers of the ensemble system. Experiments on UCI data sets achieve good classification results and demonstrate the advantages of this ensemble system.
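The abstract does not name the diversity measure used; a common choice that fits this screening procedure is the pairwise disagreement rate, sketched here as a hedged example together with a greedy screening step.

```python
import numpy as np

def disagreement(pred_a, pred_b):
    """Fraction of samples on which two classifiers disagree (a common diversity measure)."""
    pred_a, pred_b = np.asarray(pred_a), np.asarray(pred_b)
    return float(np.mean(pred_a != pred_b))

def most_diverse_subset(preds, k):
    """Greedily pick k prediction vectors that maximize average pairwise disagreement."""
    chosen = [0]  # start from an arbitrary accurate candidate
    while len(chosen) < k:
        best = max((i for i in range(len(preds)) if i not in chosen),
                   key=lambda i: sum(disagreement(preds[i], preds[j]) for j in chosen))
        chosen.append(best)
    return chosen
```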

3.
To reduce the large time overhead and low accuracy of multi-classifier fusion, a parallel multi-classifier fusion method based on selective ensemble, PMCF-SE, is proposed, combining an improved Bagging method with MapReduce. The method is built on the MapReduce parallel computing architecture: in the Map phase, base classifiers with good classification performance are selected; in the Reduce phase, the most diverse of the selected base classifiers are retained and then fused using Dempster-Shafer (D-S) evidence theory. Experimental results show that execution efficiency improves in a cluster environment compared with a single machine, and that PMCF-SE achieves higher classification accuracy than Bagging across different numbers of base classifiers.
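For the Reduce-phase fusion, Dempster's combination rule over singleton class hypotheses reduces to a normalized elementwise product of the classifiers' mass vectors. A minimal sketch, assuming each base classifier outputs a probability vector over the classes:

```python
import numpy as np

def dempster_combine(m1, m2):
    """Dempster's rule for two mass vectors over singleton classes only.

    With all mass on singletons, intersections of unlike classes are empty,
    so the combined mass is the normalized elementwise product.
    """
    m1, m2 = np.asarray(m1, float), np.asarray(m2, float)
    joint = m1 * m2                # mass assigned to agreeing singletons
    conflict = 1.0 - joint.sum()   # mass lost to contradictory pairs
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    return joint / joint.sum()     # renormalize by 1 - conflict

# Fusing three base classifiers' class-probability outputs:
masses = [np.array([0.7, 0.2, 0.1]), np.array([0.6, 0.3, 0.1]), np.array([0.5, 0.4, 0.1])]
fused = masses[0]
for m in masses[1:]:
    fused = dempster_combine(fused, m)
print(fused.argmax())  # fused decision
```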

4.
For classifier construction, a multi-classifier fusion algorithm based on diversity measures for feature selection (MFA-DMFS) is proposed, which preserves both the accuracy of and the diversity among base classifiers. The basic idea is to rank features in the original feature set by their Relief evaluation weights, construct feature subsets, and fine-tune them so that the subsets maintain a certain degree of mutual diversity, thereby building optimal base classifiers. MFA-DMFS improves the accuracy of the base classifiers while preserving the diversity among them, overcoming the trade-off between diversity and average accuracy and balancing the two. Comparative experiments on UCI data sets against multi-classifier fusion systems based on Bagging and Boosting show that the algorithm outperforms both in accuracy and running speed; retrieval experiments on image data sets also achieve good classification results.
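As an illustration of the Relief evaluation step, here is a simplified Relief weight estimate for numeric features and binary classes; the full algorithm and the paper's fine-tuning of feature subsets are more involved.

```python
import numpy as np

def relief_weights(X, y, n_iter=100, rng=np.random.default_rng(0)):
    """Simplified Relief weight estimate for numeric features (binary classes)."""
    X, y = np.asarray(X, float), np.asarray(y)
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        i = rng.integers(len(X))                   # random reference sample
        d = np.abs(X - X[i]).sum(axis=1)           # L1 distances to all samples
        d[i] = np.inf                              # exclude the sample itself
        hit = np.where(y == y[i], d, np.inf).argmin()   # nearest same-class neighbor
        miss = np.where(y != y[i], d, np.inf).argmin()  # nearest other-class neighbor
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w / n_iter                              # larger weight = more relevant feature
```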

5.
To address multi-label text classification algorithms that ignore text-term correlation and suffer from low accuracy, an ensemble multi-label text classification method combining rotation forest and AdaBoost classifiers is proposed. First, the sample set is partitioned by the rotation forest algorithm, and each sample subset is mapped into a new feature space by feature transformation, yielding multiple new sample subsets with large diversity. Then, based on AdaBoost, multiple AdaBoost base classifiers are built on the sample subsets through repeated iterations. Finally, the decisions of the base classifiers are fused by probability averaging to produce the final label predictions. Experimental results on four benchmark data sets show that the method performs well on average precision, coverage, ranking loss, Hamming loss, and one-error.
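The final fusion step, probability averaging followed by a per-label decision, might look like the following sketch; the 0.5 threshold is an assumption, since the abstract does not state the decision rule.

```python
import numpy as np

def fuse_multilabel(prob_outputs, threshold=0.5):
    """Average per-label probabilities from several base classifiers, then
    predict every label whose mean probability clears the threshold."""
    mean_probs = np.mean(prob_outputs, axis=0)    # shape: (n_samples, n_labels)
    return (mean_probs >= threshold).astype(int)  # binary label indicator matrix

# e.g. three base classifiers, each emitting an (n_samples, n_labels) array:
# y_pred = fuse_multilabel([p1, p2, p3])
```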

6.
To overcome the limited generalization ability of traffic classifiers produced by multi-classifier ensemble methods, a selective-ensemble network traffic classification framework is proposed to meet traffic classification's demand for efficient classifiers. Within this framework, a Multiple Classifiers Selective Ensemble network traffic classification method (MCSE) is proposed to solve the problem of selecting among multiple classifiers. The method first uses semi-supervised learning to improve the accuracy of the base classifiers, then refines the disagreement-based strategy for measuring classifier diversity, reducing the complexity of ensemble-based network traffic classification and effectively cutting the computational cost of selecting the best classifiers. Experiments show that, compared with the Bagging and GASEN algorithms, MCSE exploits the complementarity between base classifiers more fully and classifies traffic more efficiently.
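The abstract does not spell out MCSE's selection procedure; as a generic stand-in for the selective-ensemble idea, here is a greedy forward selection that adds whichever base classifier most improves the majority vote's validation accuracy. All names and the accuracy criterion are illustrative assumptions.

```python
import numpy as np

def greedy_select(preds, y, k):
    """Forward-select up to k base classifiers whose majority vote
    maximizes validation accuracy (generic selective-ensemble sketch)."""
    preds, y = np.asarray(preds), np.asarray(y)  # preds: (n_classifiers, n_samples) int labels
    chosen = []
    for _ in range(k):
        def acc_with(i):
            votes = preds[chosen + [i]]          # candidate committee's predictions
            maj = np.apply_along_axis(lambda c: np.bincount(c).argmax(), 0, votes)
            return np.mean(maj == y)             # majority-vote accuracy
        best = max((i for i in range(len(preds)) if i not in chosen), key=acc_with)
        chosen.append(best)
    return chosen
```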

7.
Fusion-based ensemble methods are widely used in pattern recognition, but the unstable real-time performance of some base classifiers degrades the performance of the fused multi-classifier system. To address this, a new sub-fusion ensemble classifier system built on multiple classifiers is proposed. On top of the measurement-level fusion layer, the method dynamically selects among the base classifiers of each class, and the class receiving the most votes becomes the fused system's recognition result for the feature vector, forming a new adaptive sub-fusion ensemble method. Experiments show that the method achieves clearly higher recognition accuracy and better robustness than traditional classifiers and classifier-fusion methods.

8.
Research on fast multi-classifier ensemble algorithms
This paper studies fast multi-classifier ensemble algorithms. A multi-classifier ensemble requires selecting a number of weak classifiers and assigning each a weight. When selecting weak classifiers, the classification error rate of each weak classifier on the full training set is computed, the classifiers are ranked accordingly, and the best-performing ones are kept. For weight assignment, two methods are proposed: a Biased AdaBoost algorithm and a multi-classifier ensemble algorithm based on differential evolution. Experimental results on face databases show that, compared with the classical AdaBoost algorithm, the proposed approach effectively reduces training time and improves recognition accuracy.
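The selection step described, ranking weak classifiers by their error on the full training set and keeping the best, is straightforward to sketch (the Biased AdaBoost and differential-evolution weighting schemes are not reproduced here; the `predict` interface is an assumption):

```python
import numpy as np

def pick_best_weak(classifiers, X_train, y_train, n_keep):
    """Rank weak classifiers by training error and keep the n_keep best."""
    errors = [np.mean(clf.predict(X_train) != y_train) for clf in classifiers]
    order = np.argsort(errors)                  # ascending error = best first
    return [classifiers[i] for i in order[:n_keep]]
```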

9.
A multi-classifier ensemble method based on the full information matrix
唐春生  金以慧 《软件学报》2003,14(6):1103-1109
Automatic text classification is an effective way to improve the efficiency and quality of information use, and an effective combination of multiple classifiers can achieve higher classification accuracy. This paper introduces the concept of the full information matrix of a sample set under multiple classifiers and proposes a multi-classifier ensemble method with adaptively adjusted weights. The method adaptively selects the classifier combination, determines the classifier weights, and uses classification statistics to guide the ensemble decision. Experiments on the standard Reuters-21578 text collection show that the method improves overall text classification performance in both precision and recall, demonstrating its effectiveness.

10.
To improve the ensemble classification accuracy of decision trees, a rotation forest ensemble algorithm based on feature transformation is described: the attribute set is split randomly, and principal component analysis is applied to subsamples drawn on each attribute subset to construct new sample data, thereby increasing the diversity among base classifiers and improving predictive accuracy. On the Weka platform, Bagging, AdaBoost, and rotation forest were compared as ensembles of pruned and unpruned J48 decision trees, using the average accuracy of ten runs of 10-fold cross-validation as the criterion. The results show that rotation forest's predictive accuracy exceeds that of the other two algorithms, confirming that rotation forest is an effective ensemble algorithm for decision tree classifiers.
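A hedged sketch of the rotation-forest transform described, random disjoint attribute subsets each rotated by PCA, is shown below. It is simplified (the original algorithm also bootstraps samples per subset) and assumes more training samples than features in each block so the PCA rotation is full-rank.

```python
import numpy as np
from sklearn.decomposition import PCA

def rotation_matrix(X, n_splits, rng=np.random.default_rng(0)):
    """Build one rotation-forest-style transform: split attributes into
    random disjoint subsets, run PCA on each, assemble a block rotation."""
    n_features = X.shape[1]
    perm = rng.permutation(n_features)          # random attribute split
    R = np.zeros((n_features, n_features))
    for block in np.array_split(perm, n_splits):
        pca = PCA().fit(X[:, block])            # PCA on this attribute subset
        R[np.ix_(block, block)] = pca.components_.T
    return R                                    # rotated training data is X @ R

# Each base tree would then be trained on X @ rotation_matrix(X, n_splits=3)
```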

11.
蔡铁  伍星  李烨 《计算机应用》2008,28(8):2091-2093
To construct diverse base classifiers for ensemble learning, a base-classifier construction method based on data discretization is proposed and applied to support vector machine ensembles. The method preprocesses the training set with a rough set and Boolean reasoning discretization algorithm, which effectively removes irrelevant and redundant attributes and improves the accuracy of and diversity among the base classifiers. Experimental results show that the method outperforms the traditional ensemble learning algorithms Bagging and AdaBoost.

12.
Ensemble classifiers built from overly weak base classifiers must sacrifice a great deal of training time to reach high accuracy. To address this, an instance-based fast ensemble method for strong classifiers, FSE, is proposed. First, unqualified classifiers are removed by a base-classifier evaluation method, and the rest are ranked by accuracy and diversity, yielding a set of classifiers with the highest accuracy and greatest diversity. The FSE ensemble algorithm then breaks the existing sample distribution and resamples so that the classifiers focus more on hard-to-learn samples, and on that basis determines each classifier's weight and performs the ensemble. In comparisons with the Boosting ensemble on the UCI database and a real data set, Boosting's ensemble classifiers reached recognition accuracies of at most 90.2% and 90.4%, while the FSE ensembles reached 95.6% and 93.9%, respectively; at equal accuracy, FSE shortened training time by 75% and 80%. The results show that the FSE ensemble model effectively improves recognition accuracy and shortens training time.
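FSE's exact resampling rule is not given in the abstract; the idea of shifting the sample distribution toward hard examples can be illustrated with an AdaBoost-style weight update, which also yields a per-classifier weight. This is a stand-in illustration, not FSE's published rule.

```python
import numpy as np

def reweight_hard_samples(weights, correct, eps=1e-12):
    """AdaBoost-style update: raise the weight of misclassified samples so
    later classifiers focus on the hard cases (illustrative, not FSE's rule)."""
    weights = np.asarray(weights, float)
    correct = np.asarray(correct, bool)                      # per-sample hit/miss mask
    err = weights[~correct].sum() / weights.sum()            # weighted error rate
    alpha = 0.5 * np.log((1.0 - err + eps) / (err + eps))    # classifier weight
    weights = weights * np.exp(np.where(correct, -alpha, alpha))
    return weights / weights.sum(), alpha
```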

13.
Ensemble learning is widely used to improve classification accuracy, and recent studies show that building ensemble classifiers with multimodal perturbation strategies can further improve classification performance. This paper proposes an ensemble pruning algorithm based on approximate reducts and optimal sampling (EPA_AO). In EPA_AO, a multimodal perturbation strategy is designed to construct distinct individual classifiers: it perturbs the attribute space and the training set simultaneously, increasing the diversity of the individual classifiers. The individual classifiers are trained with the evidential K-nearest-neighbor (KNN) algorithm, and EPA_AO is compared with existing algorithms of the same type on several UCI data sets. Experimental results show that EPA_AO is an effective ensemble learning method.
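The multimodal perturbation idea, perturbing the training set and the attribute space at the same time, can be sketched as follows; plain scikit-learn KNN stands in for the paper's evidential KNN, and the 0.7 feature fraction is an assumption.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def perturbed_members(X, y, n_members, feat_frac=0.7, rng=np.random.default_rng(0)):
    """Multimodal perturbation sketch: each member sees a bootstrap sample
    AND a random feature subset (plain KNN stands in for evidential KNN)."""
    X, y = np.asarray(X), np.asarray(y)
    members = []
    for _ in range(n_members):
        rows = rng.integers(0, len(X), len(X))               # bootstrap the training set
        cols = rng.choice(X.shape[1], max(1, int(feat_frac * X.shape[1])),
                          replace=False)                     # perturb the attribute space
        clf = KNeighborsClassifier().fit(X[np.ix_(rows, cols)], y[rows])
        members.append((clf, cols))                          # remember each member's features
    return members
```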

14.
Credit scoring aims to assess the risk associated with lending to individual consumers. Recently, ensemble classification methodology has become popular in this field. However, most research uses random sampling to generate training subsets for constructing the base classifiers, so their diversity is not guaranteed, which may degrade overall classification performance. In this paper, we propose an ensemble classification approach based on supervised clustering for credit scoring. In the proposed approach, supervised clustering is employed to partition the data samples of each class into a number of clusters. Clusters from different classes are then pairwise combined to form a number of training subsets. In each training subset, a specific base classifier is constructed. For a sample whose class label needs to be predicted, the outputs of these base classifiers are combined by weighted voting. The weight associated with a base classifier is determined by its classification performance in the neighborhood of the sample. In the experimental study, two benchmark credit data sets are adopted for performance evaluation, and an industrial case study is conducted. The results show that, compared to other ensemble classification methods, the proposed approach generates base classifiers with higher diversity and local accuracy and improves the accuracy of credit scoring.
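A minimal sketch of the neighborhood-based weighting described, each base classifier weighted by its accuracy on the k validation samples nearest the query, assuming scikit-learn-style classifiers and a held-out validation set (the value k=10 is an assumption):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def local_accuracy_weights(classifiers, X_val, y_val, x, k=10):
    """Weight each base classifier by its accuracy on the k validation
    samples nearest to x (sketch of neighborhood-based weighting)."""
    nn = NearestNeighbors(n_neighbors=k).fit(X_val)
    idx = nn.kneighbors([x], return_distance=False)[0]   # neighborhood of x
    return np.array([np.mean(clf.predict(X_val[idx]) == y_val[idx])
                     for clf in classifiers])            # one weight per classifier
```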

15.
To remove redundant individuals in ensemble learning, a classifier ensemble algorithm that selects individuals via subgraphs is proposed. A batch of classifiers is trained, and a weighted complete undirected graph is built from the individuals and the diversity between them; the subgraph method then selects a subset of highly diverse individuals to take part in the ensemble. Using support vector machines as base learners, experiments on several classification data sets, compared with the common ensemble methods Bagging and AdaBoost, show that the method achieves good ensemble results.

16.
Decreasing the individual error and increasing the diversity among classifiers are two crucial factors for improving ensemble performance. Nevertheless, the "kappa-error" diagram shows that enhancing diversity comes at the expense of individual accuracy. Hence, a new method named Matching Pursuit Optimization Ensemble Classifiers (MPOEC) is proposed in this paper to balance diversity and individual accuracy. MPOEC adopts a greedy iterative matching pursuit algorithm to search for an optimal combination over the entire set of classifiers, eliminating similar or poor classifiers by giving them zero coefficients. In the MPOEC approach, the coefficient of every classifier is obtained by minimizing the residual between the target function and the linear combination of the basis functions; in particular, when basis functions are similar, their coefficients approach zero within one iteration of the optimization process, which indicates that the obtained classifier coefficients reflect the diversity among ensemble individuals. Because some classifiers receive zero coefficients, MPOEC may also be considered a selective classifier ensemble method. Experimental results show that MPOEC improves performance compared with other methods. Furthermore, kappa-error diagrams indicate that the proposed method increases diversity compared with standard ensemble strategies and evolutionary ensembles.
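A bare-bones matching pursuit over classifier outputs, treating each classifier's output vector as a basis function, illustrates how never-picked (similar or poor) classifiers keep zero coefficients; the fixed iteration count and output encoding are assumptions, and atoms are assumed nonzero.

```python
import numpy as np

def matching_pursuit(outputs, target, n_iter=10):
    """Greedy matching pursuit: outputs is (n_classifiers, n_samples), each row
    a basis function; coefficients of never-picked rows stay exactly zero."""
    residual = np.asarray(target, float).copy()
    coef = np.zeros(outputs.shape[0])
    norms = np.linalg.norm(outputs, axis=1)
    for _ in range(n_iter):
        scores = (outputs @ residual) / norms            # normalized correlation with residual
        j = np.argmax(np.abs(scores))                    # best-matching atom
        step = (outputs[j] @ residual) / (outputs[j] @ outputs[j])  # least-squares step
        coef[j] += step
        residual -= step * outputs[j]                    # shrink the residual
    return coef                                          # ensemble weights, many exactly zero
```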

17.
Cost Complexity-Based Pruning of Ensemble Classifiers
In this paper we study methods that combine multiple classification models learned over separate data sets. Numerous studies posit that such approaches provide the means to efficiently scale learning to large data sets, while also boosting the accuracy of the individual classifiers. These gains, however, come at the expense of an increased demand for run-time system resources: the final ensemble meta-classifier may consist of a large collection of base classifiers that require more memory while also slowing down classification throughput. Here, we describe an algorithm for pruning the ensemble meta-classifier (i.e., discarding a subset of the available base classifiers) as a means to reduce its size while preserving its accuracy, and we present a technique for measuring the trade-off between predictive performance and available run-time system resources. The algorithm is independent of the method initially used to compute the meta-classifier. It is based on decision tree pruning methods and relies on mapping an arbitrary ensemble meta-classifier to a decision tree model. Through an extensive empirical study on meta-classifiers computed over two real data sets, we show our pruning algorithm to be a robust and competitive approach to discarding classification models without degrading the overall predictive performance of the smaller ensemble computed over those that remain after pruning.

18.
Intrusion detection is a challenging and important task in network security. Existing studies focus on raising the detection rate at the cost of higher time consumption and false alarm rates, which is expensive in practical applications. This paper therefore proposes IDHEL, an intrusion detection model using a two-layer heterogeneous-learner ensemble strategy. The model reduces the data dimensionality with probabilistic kernel principal component analysis, performs anomaly detection with multiple heterogeneous classifiers under a stratified 10-fold cross-validation strategy, screens out the three classifiers that perform best on the relevant data using the proposed classifier evaluation algorithm, and performs intrusion detection with a multi-classifier ensemble algorithm based on probability-weighted voting. Experimental results show that IDHEL outperforms existing mainstream intrusion detection models in accuracy, error rate, and time consumption.
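A hedged sketch of the IDHEL pipeline shape using scikit-learn: `KernelPCA` stands in for the paper's probabilistic kernel PCA, the three classifiers are arbitrary stand-ins for whichever heterogeneous learners the evaluation algorithm would pick, and the voting weights are placeholders for its evaluation scores.

```python
from sklearn.decomposition import KernelPCA
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Hypothetical pipeline: dimensionality reduction, then probability-weighted
# soft voting over three heterogeneous base classifiers.
model = make_pipeline(
    KernelPCA(n_components=10, kernel="rbf"),
    VotingClassifier(
        estimators=[("lr", LogisticRegression(max_iter=1000)),
                    ("nb", GaussianNB()),
                    ("dt", DecisionTreeClassifier())],
        voting="soft",              # average predicted probabilities...
        weights=[0.4, 0.3, 0.3],    # ...weighted by (assumed) evaluation scores
    ),
)
# model.fit(X_train, y_train); model.predict(X_test)
```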

19.
Combining several classifiers has proved to be an effective machine learning technique. Two concepts clearly influence the performance of an ensemble of classifiers: the diversity between classifiers and the individual accuracies of the classifiers. In this paper we propose an information theoretic framework to establish a link between these quantities. As they appear to be contradictory, we propose an information theoretic score (ITS) that expresses a trade-off between individual accuracy and diversity. This technique can be directly used, for example, for selecting an optimal ensemble from a pool of classifiers. We perform experiments in the context of overproduction and selection of classifiers, showing that selection based on the ITS outperforms state-of-the-art diversity-based selection techniques.

20.
Feature selection for ensembles has been shown to be an effective strategy for ensemble creation because it produces good subsets of features that make the classifiers of the ensemble disagree on difficult cases. In this paper we present an ensemble feature selection approach based on a hierarchical multi-objective genetic algorithm. The underpinning paradigm is "overproduce and choose". The algorithm operates on two levels: first it performs feature selection to generate a set of classifiers, and then it chooses the best team of classifiers. To show its robustness, the method is evaluated in two different contexts: supervised and unsupervised feature selection. In the former, we consider the problem of handwritten digit recognition and use three different feature sets with multi-layer perceptron neural networks as classifiers. In the latter, we consider the problem of handwritten month-word recognition and use three different feature sets with hidden Markov models as classifiers. Experiments and comparisons with classical methods, such as Bagging and Boosting, demonstrate that the proposed methodology brings compelling improvements when classifiers have to work at very low error rates. Comparisons are based on recognition rates only.
