Similar Literature
1.
陈全  赵文辉  李洁  江雨燕 《微机发展》2010,(2):87-89,94
Selective ensembles can achieve better results than either a single learner or an ensemble of all available learners, and can significantly improve the generalization performance of a learning system. This paper proposes a multi-level selective ensemble learning algorithm: base classifiers are repeatedly and partially selected according to their weights to form several ensemble classifiers, these ensemble classifiers are then ensembled again, and the final output is decided by majority voting among them. The ensemble algorithm Ada_ens was studied experimentally with decision-tree and neural-network models on 20 standard data sets. The experiments show that the data-based ensemble learning algorithm outperforms the feature-set-based one, with better classification accuracy and generalization performance.
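The abstract gives no pseudocode; the sketch below illustrates the described two-level scheme in Python, with majority voting at both levels. The weight-proportional sampling, the function name, and all parameters are assumptions for illustration.

```python
import numpy as np

def multilevel_vote(base_preds, weights, n_subensembles=5, subset_size=7, seed=0):
    """Multi-level selective ensemble (illustrative sketch).

    base_preds : (n_classifiers, n_samples) array of labels encoded as 0..K-1.
    weights    : per-classifier weights used as sampling probabilities.
    """
    rng = np.random.default_rng(seed)
    p = np.array(weights, dtype=float)
    p /= p.sum()
    n_clf, n_samples = base_preds.shape
    level1 = []
    for _ in range(n_subensembles):
        # Level 1: weight-proportional partial selection of base classifiers.
        idx = rng.choice(n_clf, size=min(subset_size, n_clf), replace=False, p=p)
        votes = base_preds[idx]
        # Majority vote inside this sub-ensemble.
        maj = np.array([np.bincount(votes[:, j]).argmax() for j in range(n_samples)])
        level1.append(maj)
    level1 = np.array(level1)
    # Level 2: majority vote across the sub-ensembles decides the output.
    return np.array([np.bincount(level1[:, j]).argmax() for j in range(n_samples)])
```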

2.
To balance diversity and accuracy in ensemble learning and improve the generalization performance of the learning system, a selective ensemble algorithm based on AdaBoost and matching pursuit is proposed. Its basic idea is to embed matching pursuit in the AdaBoost training process: the greedy iteration of matching pursuit is used to minimize the residual error between the target function and a linear combination of base classifiers, the weights of the base classifiers already trained by AdaBoost are updated according to this residual error, and ensemble members are then selected by weight. Experimental results on public data sets show that the algorithm achieves high classification accuracy.
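A toy reconstruction of the selection step, assuming base classifiers with {-1, +1} outputs: matching pursuit greedily picks the classifier most correlated with the current residual and shrinks the residual by its least-squares weight. Names and parameters are illustrative, not the paper's.

```python
import numpy as np

def mp_select(H, y, n_select=5):
    """Greedy matching-pursuit selection of base classifiers (sketch).

    H : (n_classifiers, n_samples) matrix of {-1, +1} base outputs.
    y : (n_samples,) target labels in {-1, +1}.
    Returns selected indices and their weights.
    """
    residual = y.astype(float).copy()
    selected, weights = [], []
    for _ in range(n_select):
        # Pick the classifier most correlated with the current residual.
        scores = H @ residual
        k = int(np.argmax(np.abs(scores)))
        # Least-squares weight of that classifier against the residual.
        w = scores[k] / (H[k] @ H[k])
        residual -= w * H[k]          # shrink the redundancy error
        selected.append(k)
        weights.append(w)
    return selected, weights
```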

3.
杨菊  袁玉龙  于化龙 《计算机科学》2016,43(10):266-271
To address the low classification accuracy and poor generalization of existing extreme learning machine (ELM) ensemble algorithms, a selective ensemble learning algorithm for ELMs based on ant colony optimization is proposed. The algorithm first generates a large number of diverse ELM classifiers by randomly assigning the hidden-layer input weights and biases, then uses a binary ant colony optimization search to iteratively find the best classifier combination, and finally classifies test samples with that combination. The algorithm was tested on 12 standard data sets, obtaining the best result on 9 of them and the second-best on the other 3; it significantly improves classification accuracy and generalization performance.
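A minimal sketch of a binary ant colony search over classifier subsets, assuming validation accuracy as the fitness; the pheromone-update rule and all parameters are assumptions, and the ELM-specific classifier generation is omitted.

```python
import numpy as np

def binary_aco_select(val_preds, y_val, n_ants=20, n_iters=50, rho=0.1, seed=0):
    """Binary ACO search for a classifier subset (illustrative).

    val_preds : (n_classifiers, n_val) integer label predictions on a validation set.
    """
    rng = np.random.default_rng(seed)
    n_clf = val_preds.shape[0]
    tau = np.full(n_clf, 0.5)                 # pheromone = inclusion probability
    best_mask, best_acc = np.ones(n_clf, dtype=bool), -1.0
    for _ in range(n_iters):
        for _ant in range(n_ants):
            mask = rng.random(n_clf) < tau    # each ant samples a subset
            if not mask.any():
                continue
            votes = val_preds[mask]
            maj = np.array([np.bincount(v).argmax() for v in votes.T])
            acc = (maj == y_val).mean()
            if acc > best_acc:
                best_mask, best_acc = mask.copy(), acc
        # Evaporate, then reinforce the components of the best subset found.
        tau = (1 - rho) * tau + rho * best_mask
    return best_mask, best_acc
```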

4.
To overcome the limited generalization ability of the traffic classifiers produced by multi-classifier ensemble methods, a selective-ensemble network traffic classification framework is proposed to meet the demand for efficient classifiers. Based on this framework, a Multiple Classifiers Selective Ensemble network traffic classification method (MCSE) is proposed to solve the classifier selection problem. The method first uses semi-supervised learning to improve the accuracy of the base classifiers, then refines the disagreement-based strategy for measuring classifier diversity, reducing the complexity of ensemble-based network traffic classification and effectively cutting the computational cost of selecting the best classifiers. Experiments show that, compared with the Bagging and GASEN algorithms, MCSE exploits the complementarity among base classifiers more fully and classifies traffic more efficiently.

5.
A selective ensemble algorithm for mining concept-drifting data streams
A selective ensemble learning algorithm for mining concept-drifting data streams is proposed. The algorithm selects the base classifiers that participate in the ensemble according to how far the direction of each base classifier's output vector on a validation set deviates from the direction of a reference vector. Experiments were conducted on the synthetic data sets SEA and Hyperplane, which exhibit abrupt and gradual concept drift respectively. The results show that this base-classifier selection method greatly improves the classification accuracy of the ensemble on concept-drifting data streams. An error-ambiguity decomposition was used to analyze the performance of the naive Bayes ensembles built by the algorithm; the results show that the algorithm succeeds mainly because it significantly reduces the average generalization error.
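A sketch of the selection rule as described, assuming the reference vector is the true-label vector on the validation set and that directional deviation is measured by cosine similarity (the paper may define both differently).

```python
import numpy as np

def select_by_direction(val_outputs, reference, keep=5):
    """Keep the classifiers whose output vectors deviate least from the reference.

    val_outputs : (n_classifiers, n_val) real-valued outputs, e.g. in {-1, +1}.
    reference   : (n_val,) reference vector (here: the true labels).
    """
    ref = reference / np.linalg.norm(reference)
    H = val_outputs / np.linalg.norm(val_outputs, axis=1, keepdims=True)
    cos = H @ ref                      # cosine of the angle to the reference
    order = np.argsort(-cos)           # smallest angular deviation first
    return order[:keep]
```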

6.
To counter the loss of classifier generalization caused by decomposing the training set in large-scale data classification, an ensemble learning algorithm based on parallel partitioning of the training set is proposed. Multiple groups of parallel hyperplanes are used to partition the training set several times, and on each partition a modular support vector machine network algorithm trains a base classifier. At test time the outputs of the base classifiers are combined by majority voting. Experiments on three large-scale problems show that, without increasing training or testing time, the ensemble effectively reduces the variance of the classifier while keeping its bias essentially unchanged, and thus effectively mitigates the loss of generalization caused by partitioning the training set.
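A strongly simplified stand-in for the partitioning idea: each pass slices the training set into slabs between parallel hyperplanes along a random direction, one classifier is trained per slab, and test outputs are majority-voted. scikit-learn's LinearSVC replaces the paper's modular SVM network, and labels are assumed to be non-negative integers.

```python
import numpy as np
from sklearn.svm import LinearSVC

def train_parallel_partition(X, y, n_partitions=5, n_slabs=4, seed=0):
    """Train base classifiers on slab-wise splits of the training set (sketch)."""
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_partitions):
        d = rng.normal(size=X.shape[1])          # direction of the hyperplane family
        proj = X @ d
        edges = np.quantile(proj, np.linspace(0, 1, n_slabs + 1))
        for lo, hi in zip(edges[:-1], edges[1:]):
            m = (proj >= lo) & (proj <= hi)      # samples between two parallel hyperplanes
            if len(np.unique(y[m])) > 1:         # need at least two classes to fit
                models.append(LinearSVC().fit(X[m], y[m]))
    return models

def predict_majority(models, X):
    votes = np.array([m.predict(X) for m in models]).astype(int)
    return np.array([np.bincount(col).argmax() for col in votes.T])
```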

7.
To further improve the generalization ability of sea-surface radar target recognition in complex jamming environments, a dynamic selection ensemble algorithm based on k-medoids clustering and a random reference classifier (RRC), called KMRRC, is proposed. Resampling is used to generate multiple base classifiers, which are partitioned into clusters according to a pairwise diversity measure; an RRC model is built for each base classifier on a validation set; finally, the RRC is used to dynamically select the most competitive base classifiers from each cluster for the ensemble decision. KMRRC's parameters were set by a tuning experiment, and Weka's API was then called from Java to compare KMRRC with 9 common ensemble algorithms and the base classification algorithms on a self-built library of fully polarimetric high-resolution range profile (HRRP) target samples and 17 UCI data sets; the influence of the choice of diversity measure on KMRRC was also studied. The experiments verify the effectiveness of the algorithm for sea-surface radar target recognition.
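A sketch of the clustering stage only: pairwise disagreement serves as the diversity distance and a bare-bones k-medoids partitions the classifiers. The RRC-based dynamic selection is replaced here by picking each cluster's most accurate member on validation data, a deliberate simplification of the paper's method.

```python
import numpy as np

def disagreement_matrix(preds):
    """preds: (n_classifiers, n_val). D[i, j] = fraction of samples where i and j differ."""
    n = preds.shape[0]
    return np.array([[(preds[i] != preds[j]).mean() for j in range(n)]
                     for i in range(n)])

def k_medoids(D, k, n_iters=20, seed=0):
    """Bare-bones k-medoids on a precomputed distance matrix D."""
    rng = np.random.default_rng(seed)
    medoids = rng.choice(D.shape[0], size=k, replace=False)
    for _ in range(n_iters):
        labels = np.argmin(D[:, medoids], axis=1)      # assign to nearest medoid
        for c in range(k):
            members = np.flatnonzero(labels == c)
            if members.size:                            # re-center non-empty clusters
                within = D[np.ix_(members, members)].sum(axis=1)
                medoids[c] = members[np.argmin(within)]
    return labels, medoids

def select_per_cluster(preds, y_val, labels, k):
    """Simplification: take each cluster's most accurate member on validation data."""
    acc = (preds == y_val).mean(axis=1)
    return [int(np.flatnonzero(labels == c)[np.argmax(acc[labels == c])])
            for c in range(k) if (labels == c).any()]
```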

8.
Selective ensemble learning trains multiple base classifiers for the same problem and combines the outputs of a subset of them chosen by some rule. A selective ensemble can achieve better results than either a single learner or an ensemble of all learners, and can significantly improve the generalization performance of a learning system. A multi-level selective ensemble learning algorithm, Ada_ens, is proposed. Experimental results show that Ada_ens achieves better learning results and generalization performance.

9.
Ensemble pruning deals with the reduction of base classifiers prior to combination in order to improve generalization and prediction efficiency. Existing ensemble pruning algorithms require considerable pruning time. This paper presents a fast pruning approach: pattern mining based ensemble pruning (PMEP). In this algorithm, the prediction results of all base classifiers are organized as a transaction database, and an FP-Tree structure is used to compact the prediction results. A greedy pattern mining method is then used to find the ensemble of size k. After obtaining the ensembles of all possible sizes, the one with the best accuracy is output. Experimental results show that, compared with Bagging, GASEN, and Forward Selection, PMEP achieves the best prediction accuracy and keeps the final ensemble small; more importantly, its pruning time is much shorter than that of the other ensemble pruning algorithms.

10.
Compared with full ensemble learning, ensemble pruning searches for an optimal subset among multiple classifiers, improving the generalization performance of the classifier and simplifying the ensemble process. Pareto ensemble pruning considers both classifier accuracy and ensemble size, treating both as optimization objectives. However, it ignores the diversity among classifiers, so the selected classifiers tend to be highly similar to one another. This paper proposes a diversity-aware Pareto ensemble pruning algorithm that combines classifier diversity and accuracy into the first optimization objective and takes ensemble size as the second, yielding a multi-objective optimization. Experiments show that, at comparable ensemble sizes, the improved algorithm outperforms Pareto ensemble pruning thanks to the incorporation of diversity.
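An illustrative bi-objective filter in the spirit of the description: objective one mixes subset accuracy with mean pairwise disagreement (the combination weight alpha is an assumption), objective two is subset size, and a nondominated filter over randomly sampled subsets returns the Pareto front.

```python
import numpy as np

def pareto_prune(preds, y, n_candidates=200, alpha=0.5, seed=0):
    """Return Pareto-optimal subsets for (accuracy + diversity, size) (sketch).

    preds : (n_classifiers, n_samples) integer label predictions.
    """
    rng = np.random.default_rng(seed)
    n = preds.shape[0]
    cands = []
    for _ in range(n_candidates):
        mask = rng.random(n) < 0.5
        if mask.sum() < 2:
            continue
        sub = preds[mask]
        maj = np.array([np.bincount(c).argmax() for c in sub.T])
        acc = (maj == y).mean()
        pairs = [(sub[i] != sub[j]).mean()
                 for i in range(len(sub)) for j in range(i + 1, len(sub))]
        score = alpha * acc + (1 - alpha) * np.mean(pairs)  # objective 1 (maximize)
        cands.append((mask, score, int(mask.sum())))        # objective 2: size (minimize)
    # Keep candidates not dominated by any other (higher score AND smaller size).
    front = [c for c in cands
             if not any(o[1] >= c[1] and o[2] <= c[2] and
                        (o[1] > c[1] or o[2] < c[2]) for o in cands)]
    return front
```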

11.
Diversity among individual classifiers is widely recognized as a key factor in successful ensemble selection, while the ultimate goal of ensemble pruning is to improve predictive accuracy. Diversity and accuracy are two important properties of an ensemble. Existing ensemble pruning methods always consider diversity and accuracy separately; in fact, the two closely interrelate and should be considered simultaneously. Accordingly, three new measures, Simultaneous Diversity & Accuracy, Diversity-Focused-Two, and Accuracy-Reinforcement, are developed for pruning the ensemble with a greedy algorithm. The motivation for Simultaneous Diversity & Accuracy is to consider the difference between the subensemble and the candidate classifier and, at the same time, the accuracy of both, so that difficult samples are not given up and the generalization performance of the ensemble is further improved. Diversity-Focused-Two stems from the observation that ensemble diversity attaches more importance to the differences among the classifiers in an ensemble, while Accuracy-Reinforcement reinforces the concern for ensemble accuracy. Extensive experiments verified the effectiveness and efficiency of the three proposed pruning measures and show that, by considering diversity and accuracy simultaneously, a well-performing selective ensemble with superior generalization capability can be acquired.
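The paper defines three specific measures; the sketch below shows only the greedy framework, with a stand-in score that adds the candidate's disagreement with the current subensemble to the accuracies of both. This captures the "simultaneous" idea but does not reproduce the published formulas.

```python
import numpy as np

def greedy_prune(preds, y, n_select=5):
    """Greedy ensemble pruning with a combined diversity/accuracy score (sketch).

    preds : (n_classifiers, n_samples) labels encoded as 0..K-1.
    """
    acc = (preds == y).mean(axis=1)
    selected = [int(np.argmax(acc))]          # seed with the most accurate classifier
    remaining = set(range(preds.shape[0])) - set(selected)
    while len(selected) < n_select and remaining:
        sub = preds[selected]
        maj = np.array([np.bincount(c).argmax() for c in sub.T])
        sub_acc = (maj == y).mean()
        def score(k):
            diversity = (preds[k] != maj).mean()   # disagreement with the subensemble
            return diversity + acc[k] + sub_acc    # stand-in combined measure
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```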

12.
Ensemble systems improve the generalization of single classifiers by aggregating the predictions of a set of base classifiers. Assessing classification reliability (posterior probability) is crucial in applications such as biomedical diagnosis, where the cost of a misclassified input can be unacceptably high. Available methods either calibrate the posterior probability on an aggregated decision value or obtain a posterior probability for each base classifier and aggregate the results. We propose a method that exploits the distribution of the decision values of the base classifiers to compute a statistic from which the posterior probability is generated. Three approaches are considered for fitting the probabilistic output to the statistic: the standard Gaussian CDF, isotonic regression, and linear logistic regression. Although this study focuses on a bagged support vector machine ensemble (Z-bag), the approach is limited neither by the aggregation method, nor by the choice of base classifiers, nor by the statistic used. Performance is assessed on one artificial and 12 real-world data sets from the UCI Machine Learning Repository. The approach achieves accuracy and posterior estimates comparable to or better than those of existing ensemble calibration methods, at lower computational cost.
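A minimal sketch of the Gaussian-CDF variant for a bagged linear-SVM ensemble: the statistic is the standardized mean of the base decision values, mapped through the standard normal CDF. The scikit-learn and SciPy calls are real; the pipeline details and parameters are assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.svm import LinearSVC
from sklearn.utils import resample

def fit_bagged_svm(X, y, n_bags=25, seed=0):
    """Bag of linear SVMs trained on bootstrap resamples."""
    return [LinearSVC().fit(*resample(X, y, random_state=seed + i))
            for i in range(n_bags)]

def posterior_gaussian_cdf(models, X):
    """Map the distribution of base decision values to a posterior (sketch)."""
    D = np.array([m.decision_function(X) for m in models])  # (n_bags, n_samples)
    # Statistic: standardized mean decision value across the bag.
    z = D.mean(axis=0) / (D.std(axis=0) + 1e-12)
    return norm.cdf(z)    # P(class = +1 | x) via the standard Gaussian CDF
```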

13.
The burn-through point (BTP) is a critical parameter of the sintering process and directly determines the quality of the final sinter. Because the BTP is difficult to measure online directly, predicting it online with learned models and adjusting operating parameters accordingly is of great significance for improving sinter quality. For this engineering problem, a genetic-optimization-based Wrapper feature selection method is first proposed to select the feature combination that maximizes subsequent prediction performance. On this basis, to overcome the tendency of a single learner to overfit, a sparse-representation-pruning ensemble modeling algorithm based on random weight neural networks (RVFLNs), called SRP-ERVFLNs, is proposed. The algorithm uses RVFLNs, which train quickly and generalize well, as individual base learners, and increases the diversity among ensemble sub-models by perturbing parameters such as the base functions and the number of hidden nodes of the base learners. To further improve the generalization performance and computational efficiency of the ensemble model, a sparse representation pruning algorithm is introduced to prune the ensemble efficiently. Finally, the algorithm is applied to predictive modeling of the sintering BTP. Experiments on industrial data show that the method achieves better prediction accuracy, generalization performance, and computational efficiency than the alternatives.

14.
Independent component analysis (ICA) has been widely used for microarray data set classification, but an unsolved problem remains: the independent component (IC) sets may not be reproducible across different ICA transformations. Inspired by the idea of ensemble feature selection, we design an ICA-based ensemble learning system to fully exploit the differences among IC sets. In this system, several IC sets are first generated by different ICA transformations. A multi-objective genetic algorithm (MOGA) is designed to select different biologically significant IC subsets from these IC sets, which are then used to build base classifiers. Three schemes are used to fuse the base classifiers. The first fusion scheme combines all individuals in the final generation of the MOGA. In addition, during evolution, a global-recording technique records the best IC subset of each IC set in a global-recording list; the IC subsets in this list are used to build the base classifiers of the second fusion scheme. Furthermore, by pruning about half of the less accurate base classifiers obtained by the second scheme, a compact and more accurate ensemble system is built, which constitutes the third fusion scheme. Tests on three microarray data sets demonstrate that these ensemble schemes further improve the performance of the ICA-based classification model, and that the third fusion scheme yields the most accurate ensemble system with the smallest ensemble size.

15.
Introducing ensemble learning into incremental learning can significantly improve learning performance. Most recent research on ensemble-based incremental learning combines multiple homogeneous classifiers by weighted voting and does not adequately resolve the stability-plasticity dilemma of incremental learning. To address this, a heterogeneous-classifier ensemble incremental learning algorithm is proposed. During training, to make the model more stable, new data are used to train multiple base classifiers that are added to a heterogeneous ensemble model, while a locality-sensitive hashing (LSH) table stores data sketches for later nearest-neighbor lookup of test samples. To adapt to continually changing data, newly acquired data are also used to update the voting weights of the base classifiers in the ensemble. When predicting the class of a test sample, the data in the LSH table similar to the sample serve as a bridge for computing each base classifier's dynamic weight for that sample, and the class is determined by combining the voting weights and dynamic weights of the base classifiers. Comparative experiments demonstrate that the incremental algorithm has high stability and generalization ability.

16.
To improve the performance of classifier ensembles, an ensemble method combining clustering with rank-based pruning is proposed. First, the confusion matrix is used to quantify the diversity between base classifiers, and clustering partitions the classifiers into subsets. A rank-based pruning algorithm is then proposed: starting from the classifier closest to each cluster center, the diversity matrix is dynamically weighted according to classifier distance, and the classifiers in each subset are pruned proportionally with the weighted diversity as the ranking criterion. Finally, the selected base classifiers are combined by voting. Comparisons with several ensemble methods on 10 data sets from the UCI repository show that this clustering-and-rank-pruning selection method effectively improves the classification ability of the ensemble system.

17.
Predicting future stock index price movement has always been a fascinating research area, both for investors who wish to profit by trading stocks and for researchers who attempt to uncover the information buried in complex stock market time series data. The prediction problem can be cast as binary classification with two class labels, one for increasing movement and the other for decreasing movement. A wide range of classifiers has been tested for this application in the literature. As the performance of an individual classifier varies across data sets and performance measures, it is impractical to declare one specific classifier the best, so designing an efficient classifier ensemble instead of an individual classifier is attracting increasing attention. Selecting base classifiers and deciding their relative importance in the ensemble with respect to a variety of performance criteria can in turn be treated as a Multi Criteria Decision Making (MCDM) problem. In this paper, an integrated TOPSIS Crow Search based weighted voting classifier ensemble is proposed for stock index price movement prediction. The Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), a popular MCDM technique, is used to rank and select the set of base classifiers for the ensemble, whereas the weights of the classifiers used in the ensemble are tuned by the Crow Search method. The proposed ensemble model is validated on the historical prices of the BSE SENSEX, S&P500 and NIFTY 50 stock indices. The model shows better performance than individual classifiers and other ensemble models such as majority voting, weighted voting, and differential evolution and particle swarm optimization based classifier ensembles.
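A sketch of the TOPSIS step alone: classifiers are ranked by closeness to the ideal solution over a matrix of performance criteria (e.g., accuracy, F-measure, AUC per classifier). Equal weights and all-benefit criteria are assumed; the Crow Search weight tuning is not shown.

```python
import numpy as np

def topsis_rank(criteria, weights=None):
    """Rank alternatives (classifiers) by the TOPSIS closeness coefficient.

    criteria : (n_classifiers, n_criteria) matrix; larger = better in every column.
    """
    C = np.asarray(criteria, dtype=float)
    w = np.ones(C.shape[1]) / C.shape[1] if weights is None else np.asarray(weights)
    # Vector-normalize each criterion column, then apply the weights.
    V = w * C / np.linalg.norm(C, axis=0)
    ideal, anti = V.max(axis=0), V.min(axis=0)   # ideal and anti-ideal solutions
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    closeness = d_neg / (d_pos + d_neg)          # 1 = at the ideal solution
    return np.argsort(-closeness), closeness
```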

18.
Ensemble pruning deals with the selection of base learners prior to combination in order to improve prediction accuracy and efficiency. The ensemble literature points out that for an ensemble classifier to achieve high prediction accuracy, it is critical that it consist of accurate classifiers that are, at the same time, as diverse as possible. In this paper, a novel ensemble pruning method, called PL-bagging, is proposed. To attain a balance between the diversity and accuracy of base learners, PL-bagging employs the positive Lasso to assign weights to base learners in the combination step. Simulation studies and theoretical investigation show that PL-bagging filters out redundant base learners while assigning higher weights to more accurate ones. This improved weighting scheme results in higher classification accuracy, and the improvement becomes even more significant as the ensemble size increases. The performance of PL-bagging was compared with state-of-the-art ensemble pruning methods for the aggregation of bootstrapped base learners on 22 real and 4 synthetic data sets. The results indicate that PL-bagging significantly outperforms state-of-the-art ensemble pruning methods such as Boosting-based pruning and trimmed bagging.
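A sketch of the combination step: the labels are regressed on the base learners' predictions with a positivity-constrained Lasso, so redundant learners receive zero weight. scikit-learn's Lasso(positive=True) stands in for the positive-Lasso solver; alpha is an assumed hyperparameter.

```python
import numpy as np
from sklearn.linear_model import Lasso

def pl_bagging_weights(base_preds, y, alpha=0.01):
    """Positive-Lasso weights for base learners (sketch of PL-bagging's idea).

    base_preds : (n_samples, n_learners) base predictions in {-1, +1}.
    y          : (n_samples,) labels in {-1, +1}.
    """
    lasso = Lasso(alpha=alpha, positive=True, fit_intercept=False)
    lasso.fit(base_preds, y)
    return lasso.coef_            # zero weight = pruned learner

def pl_bagging_predict(base_preds, w):
    """Weighted vote of the surviving base learners."""
    return np.sign(base_preds @ w)
```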
