Similar Articles
 20 similar articles found (search time: 500 ms)
1.
Bagging and boosting are methods that generate a diverse ensemble of classifiers by manipulating the training data given to a base learning algorithm. Breiman has pointed out that they rely for their effectiveness on the instability of the base learning algorithm. An alternative approach to generating an ensemble is to randomize the internal decisions made by the base algorithm. This general approach has been studied previously by Ali and Pazzani and by Dietterich and Kong. This paper compares the effectiveness of randomization, bagging, and boosting for improving the performance of the decision-tree algorithm C4.5. The experiments show that in situations with little or no classification noise, randomization is competitive with (and perhaps slightly superior to) bagging but not as accurate as boosting. In situations with substantial classification noise, bagging is much better than boosting, and sometimes better than randomization.
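The bootstrap-and-vote procedure that bagging uses can be sketched as follows. This is a minimal 1-D illustration, not the paper's C4.5 setup: the decision-stump base learner and the toy data are invented for illustration only.

```python
import random
from collections import Counter

def train_stump(data):
    """Fit a 1-D decision stump: predict 1 iff sign * (x - t) > 0."""
    best = None
    for t in sorted({x for x, _ in data}):
        for sign in (1, -1):
            errs = sum((1 if sign * (x - t) > 0 else 0) != y for x, y in data)
            if best is None or errs < best[0]:
                best = (errs, t, sign)
    _, t, sign = best
    return lambda x: 1 if sign * (x - t) > 0 else 0

def bagging(data, n_estimators=11, seed=0):
    """Train one stump per bootstrap resample; predict by majority vote."""
    rng = random.Random(seed)
    models = []
    for _ in range(n_estimators):
        boot = [rng.choice(data) for _ in data]  # sample with replacement
        models.append(train_stump(boot))
    def predict(x):
        votes = Counter(m(x) for m in models)
        return votes.most_common(1)[0][0]
    return predict
```

Randomization, by contrast, would keep the full training set and instead perturb the tie-breaking or split choice inside `train_stump` itself.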

2.
A Bagging-based combined k-NN prediction model and method   Cited: 1 (self: 0, others: 1)
The k-nearest-neighbour method predicts with a single value of k and therefore cannot accommodate the feature differences that may exist between instances, so its overall prediction accuracy is hard to guarantee. To address this problem, a Bagging-based combined k-NN prediction model is proposed, and on this basis a Bgk-NN prediction method with attribute selection is implemented. The method builds a collection of individualized prediction models through training; each model independently produces a prediction for an unknown instance, and the median of these predictions is taken as the combined result. Bgk-NN prediction is applicable to data sets of all types, containing both discrete-valued and continuous-valued attributes. Experiments on standard data sets show that the prediction accuracy of Bgk-NN is clearly higher than that of the traditional k-NN method.
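The median-combination idea described above can be sketched as follows. This is an illustrative 1-D regression sketch, not the paper's Bgk-NN implementation: attribute selection is omitted, and varying k per bootstrap model is an assumption made here to mimic the "individualized models" the abstract describes.

```python
import random
from statistics import median

def knn_predict(train, x, k):
    """Average target value of the k nearest neighbours (1-D feature)."""
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return sum(v for _, v in nearest) / len(nearest)

def bgknn_predict(train, x, n_models=9, seed=1):
    """Each bootstrap model predicts independently; combine by the median."""
    rng = random.Random(seed)
    preds = []
    for _ in range(n_models):
        boot = [rng.choice(train) for _ in train]  # bootstrap resample
        k = rng.choice([1, 3, 5])                  # individualized k per model
        preds.append(knn_predict(boot, x, k))
    return median(preds)
```

The median, unlike the mean, is insensitive to the occasional bootstrap model whose neighbourhood is badly skewed by duplicated points.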

3.
The idea of ensemble learning is to combine multiple learners and aggregate their predictions into a final conclusion. Typical model-combination methods include voting, mixture of experts, stacked generalization, and cascading, but the performance of all of these methods leaves room for improvement. This paper proposes a novel ensemble learning algorithm, ReinforcedEnsemble. The ReinforcedEnsemble algorithm consists of two major components: a ReinforcedEnsemble feature-extraction algorithm and ReinforcedEnsemble base classifiers. Experiments comparing ReinforcedEnsemble with other ensemble learning algorithms show that the proposed algorithm achieves the best results on several metrics.

4.
Research on applications of out-of-bag samples   Cited: 3 (self: 0, others: 3)
Zhang Chunxia, Guo Gao. 《软件》 (Software), 2011, (3): 1-4
Bagging ensembles combine unstable base classifiers and thereby greatly reduce the classification error of "weak" learning algorithms; out-of-bag samples are a natural by-product of Bagging. Out-of-bag samples are now widely used to estimate the generalization error of Bagging ensembles, to construct related ensemble classifiers, and for other purposes. This article surveys the applications of out-of-bag samples, describes the main research topics and their characteristics, and discusses possible future research directions.

5.
A comparison of decision tree ensemble creation techniques   Cited: 3 (self: 0, others: 3)
We experimentally evaluate bagging and seven other randomization-based approaches to creating an ensemble of decision tree classifiers. Statistical tests were performed on experimental results from 57 publicly available data sets. When cross-validation comparisons were tested for statistical significance, the best method was statistically more accurate than bagging on only eight of the 57 data sets. Alternatively, examining the average ranks of the algorithms across the group of data sets, we find that boosting, random forests, and randomized trees are statistically significantly better than bagging. Because our results suggest that using an appropriate ensemble size is important, we introduce an algorithm that decides when a sufficient number of classifiers has been created for an ensemble. Our algorithm uses the out-of-bag error estimate, and is shown to result in an accurate ensemble for those methods that incorporate bagging into the construction of the ensemble.
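The out-of-bag error estimate that the abstract's stopping rule relies on can be computed as follows. This is a minimal sketch of the OOB idea, not the paper's algorithm: each training point is judged only by the models whose bootstrap sample excluded it, and the trivial majority-class base learner here is an assumption standing in for real decision trees.

```python
import random
from collections import Counter

def oob_error(data, n_models=15, seed=0):
    """Out-of-bag error: each point is voted on only by models
    whose bootstrap sample did not contain it."""
    rng = random.Random(seed)
    n = len(data)
    votes = [[] for _ in range(n)]
    for _ in range(n_models):
        idx = [rng.randrange(n) for _ in range(n)]        # bootstrap indices
        sample = [data[i] for i in idx]
        # trivial base learner: always predict the sample's majority label
        pred = Counter(y for _, y in sample).most_common(1)[0][0]
        for i in set(range(n)) - set(idx):                # out-of-bag points
            votes[i].append(pred)
    wrong = total = 0
    for i, (_, y) in enumerate(data):
        if votes[i]:
            total += 1
            wrong += Counter(votes[i]).most_common(1)[0][0] != y
    return wrong / total
```

A stopping rule can then add classifiers until this estimate stops improving, with no held-out validation set required.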

6.
Ensemble learning, which reconstructs a strong classifier from multiple weak classifiers, is an important research direction in machine learning. Although many methods for generating diverse base classifiers have been proposed, their robustness still needs improvement. The shrinking-sample ensemble learning algorithm combines the ideas of the two most popular ensemble methods, boosting and bagging: it repeatedly removes high-confidence samples from the training set, so that the training set shrinks step by step and samples whose importance was underestimated are trained thoroughly by later classifiers. This strategy produces a series of shrinking training subsets and hence a series of diverse base classifiers. Like boosting and bagging, the method combines its base classifiers by voting. Rigorous ten-fold cross-validation on 8 UCI data sets with 7 base classifiers shows that the shrinking-sample ensemble algorithm generally outperforms both boosting and bagging.

7.
Ensemble learning is attracting much attention from the pattern recognition and machine learning communities for its good generalization. Both theoretical and experimental research shows that combining a set of accurate and diverse classifiers leads to a powerful classification system. An algorithm for the selective ensemble of rough subspaces, called FS-PP-EROS, is proposed in this paper. Rough set-based attribute reduction is introduced to generate a set of reducts, and each reduct is then used to train a base classifier. We introduce an accuracy-guided forward search and post-pruning strategy to select a subset of the base classifiers for constructing an efficient and effective ensemble system. The experiments show that the classification accuracy of ensemble systems built with the accuracy-guided forward search strategy increases at first, reaches a maximal value, and then decreases as base classifiers are sequentially added. We delete the base classifiers added after the maximal accuracy is reached. The experimental results show that the proposed ensemble systems outperform bagging and random subspace methods in terms of both accuracy and ensemble size. FS-PP-EROS can maintain or improve classification accuracy with very few base classifiers, which leads to a powerful and compact classification system.

8.
N-gram character sequences can effectively capture an author's individual stylistic information in text, but the resulting feature space is highly sparse and contains many noisy features. To address this problem, a Chinese writeprint (authorship) recognition algorithm based on semi-random feature sampling is proposed. The algorithm first uses a dispersion criterion to select an individual feature set of a certain granularity for each author, then partitions the individual feature set into several feature subspaces of equal dimension using a semi-random selection mechanism, trains a base classifier on each subspace, and finally builds the ensemble classification model by majority voting. Comparative experiments on a real Chinese data set against ensemble classifiers based on random subspaces and on Bagging show that the algorithm outperforms both the random subspace and Bagging algorithms in accuracy and diversity, and achieves better recognition performance than a single-classifier model.

9.
In this paper, we propose a dynamic classifier system, MSEBAG, which is characterised by searching for the ‘minimum-sufficient ensemble’ and bagging at the ensemble level. It adopts an ‘over-generation and selection’ strategy and aims to achieve a good bias–variance trade-off. In the training phase, MSEBAG first searches for the ‘minimum-sufficient ensemble’, which maximises the in-sample fitness with the minimal number of base classifiers. Then, starting from the ‘minimum-sufficient ensemble’, a backward stepwise algorithm is employed to generate a collection of ensembles. The objective is to create a collection of ensembles with a descending fitness on the data, as well as a descending complexity in the structure. MSEBAG dynamically selects the ensembles from the collection for the decision aggregation. The extended adaptive aggregation (EAA) approach, a bagging-style algorithm performed at the ensemble level, is employed for this task. EAA searches for the competent ensembles using a score function, which takes into consideration both the in-sample fitness and the confidence of the statistical inference, and averages the decisions of the selected ensembles to label the test pattern. The experimental results show that the proposed MSEBAG outperforms the benchmarks on average.

10.
To address the problem that constraint-score-based feature selection is easily affected by the composition and cardinality of the pairwise constraints, a dynamic ensemble selection algorithm based on bagging constraint score (BCS-DES) is proposed. The algorithm introduces the bagging constraint score (BCS) into dynamic ensemble selection: the sample space is partitioned into different regions, and a multi-population parallel genetic algorithm selects a locally optimal classifier ensemble for each test sample, thereby improving classification accuracy. Experiments on UCI data sets show that BCS-DES is less affected by the composition and cardinality of the pairwise constraints than existing feature selection algorithms and achieves better results.

11.
Analysis of scientific data requires accurate regressor algorithms to decrease prediction errors. Many machine learning algorithms, such as neural networks, rule-based algorithms, regression trees and certain lazy learners, are used to meet this need. In recent years, different ensemble regression strategies have been developed to obtain enhanced predictors with lower forecasting errors. Ensemble algorithms combine good models that make errors in different parts of the analyzed data. There are two main approaches to generating ensemble regression algorithms: boosting and bagging. The aim of this article is to evaluate a boosting-based ensemble approach, forward stage-wise additive modelling (FSAM), to improve the prediction ability of some widely used base regressors. We used 10 regression algorithms of four different types to make predictions on 10 diverse data sets from different scientific areas, and we compared the experimental results in terms of correlation coefficient, mean absolute error, and root mean squared error. Furthermore, we made use of scatter plots to demonstrate the effect of ensemble modelling on the prediction accuracy of the evaluated algorithms. We found empirically that, in general, FSAM enhances the accuracy of the base regressors or at least maintains the base regressor performance.

12.
Several pruning strategies that can be used to reduce the size and increase the accuracy of bagging ensembles are analyzed. These heuristics select subsets of complementary classifiers that, when combined, can perform better than the whole ensemble. The pruning methods investigated are based on modifying the order of aggregation of classifiers in the ensemble. In the original bagging algorithm, the order of aggregation is left unspecified. When this order is random, the generalization error typically decreases as the number of classifiers in the ensemble increases. If an appropriate ordering for the aggregation process is devised, the generalization error reaches a minimum at intermediate numbers of classifiers. This minimum lies below the asymptotic error of bagging. Pruned ensembles are obtained by retaining a fraction of the classifiers in the ordered ensemble. The performance of these pruned ensembles is evaluated in several benchmark classification tasks under different training conditions. The results of this empirical investigation show that ordered aggregation can be used for the efficient generation of pruned ensembles that are competitive, in terms of performance and robustness of classification, with computationally more costly methods that directly select optimal or near-optimal subensembles.
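The ordered-aggregation idea can be sketched as a greedy reordering over a pool of classifiers represented by their prediction vectors on a validation set. This is a minimal reduce-error-style sketch under invented toy data, not the paper's specific ordering heuristics.

```python
from collections import Counter

def ensemble_error(members, labels):
    """Majority-vote error of a sub-ensemble; each member is a
    vector of predictions on the validation set."""
    wrong = 0
    for j, y in enumerate(labels):
        vote = Counter(m[j] for m in members).most_common(1)[0][0]
        wrong += vote != y
    return wrong / len(labels)

def ordered_aggregation(pool, labels):
    """Greedily reorder the pool: at each step append the classifier
    whose addition minimises the current sub-ensemble's error."""
    remaining = list(pool)
    ordered = []
    while remaining:
        best = min(remaining,
                   key=lambda m: ensemble_error(ordered + [m], labels))
        ordered.append(best)
        remaining.remove(best)
    return ordered

def prune(pool, labels, fraction=0.5):
    """Retain only the leading fraction of the ordered ensemble."""
    ordered = ordered_aggregation(pool, labels)
    keep = max(1, int(len(ordered) * fraction))
    return ordered[:keep]
```

Because the error curve of the ordered ensemble dips at intermediate sizes, keeping only the leading fraction can beat aggregating the whole pool.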

13.
Ensemble pruning deals with the selection of base learners prior to combination in order to improve prediction accuracy and efficiency. In the ensemble literature, it has been pointed out that for an ensemble classifier to achieve higher prediction accuracy, it is critical that it consist of accurate classifiers that are at the same time as diverse as possible. In this paper, a novel ensemble pruning method, called PL-bagging, is proposed. In order to attain the balance between diversity and accuracy of base learners, PL-bagging employs the positive Lasso to assign weights to base learners in the combination step. Simulation studies and theoretical investigation showed that PL-bagging filters out redundant base learners while assigning higher weights to more accurate ones. This improved weighting scheme further results in higher classification accuracy, and the improvement becomes even more significant as the ensemble size increases. The performance of PL-bagging was compared with state-of-the-art ensemble pruning methods for the aggregation of bootstrapped base learners using 22 real and 4 synthetic datasets. The results indicate that PL-bagging significantly outperforms state-of-the-art ensemble pruning methods such as Boosting-based pruning and Trimmed bagging.
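The positive-Lasso weighting step can be sketched as nonnegative L1-penalised least squares. This is a minimal projected-gradient sketch of that optimisation, not the paper's PL-bagging solver; the function name, hyperparameters, and toy prediction vectors are assumptions for illustration.

```python
def positive_lasso_weights(preds, y, lam=0.01, lr=0.01, steps=3000):
    """Minimise mean((sum_i w_i * preds_i - y)^2) + lam * sum_i w_i
    subject to w_i >= 0, via projected gradient descent."""
    m, n = len(preds), len(y)
    w = [1.0 / m] * m
    for _ in range(steps):
        # residual of the current weighted combination
        r = [sum(w[i] * preds[i][j] for i in range(m)) - y[j]
             for j in range(n)]
        for i in range(m):
            g = 2.0 * sum(r[j] * preds[i][j] for j in range(n)) / n + lam
            w[i] = max(0.0, w[i] - lr * g)  # gradient step, project onto w >= 0
    return w
```

The L1 penalty together with the nonnegativity constraint drives the weights of redundant or anti-correlated base learners to exactly zero, which is the pruning effect the abstract describes.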

14.
Because prior information about the data distribution, parameters, and class labels is lacking, the correctness of some base clusterings cannot be guaranteed, which degrades the performance of the clustering ensemble; moreover, different base clustering decisions contribute differently to the ensemble, and treating them equally limits the quality of the combined result. To solve this problem, a selective K-means clustering ensemble algorithm based on random sampling (RS-KMCE) is proposed. The random sampling strategy in this algorithm prevents the selection of base clustering decisions from falling into local minima, and a combined evaluation value defined in terms of diversity and correctness helps the algorithm converge quickly to a good subset of base clusterings, improving ensemble performance. Experimental results on 2 synthetic data sets and 4 UCI data sets show that the clustering performance of RS-KMCE is better than that of K-means, the K-means clustering ensemble (KMCE), and the Bagging-based selective K-means clustering ensemble (BA-KMCE).

15.
Ensemble learning strategies, especially boosting and bagging decision trees, have demonstrated impressive capacities to improve the prediction accuracy of base learning algorithms. Further gains have been demonstrated by strategies that combine simple ensemble formation approaches. We investigate the hypothesis that the improvement in accuracy of multistrategy approaches to ensemble learning is due to an increase in the diversity of ensemble members that are formed. In addition, guided by this hypothesis, we develop three new multistrategy ensemble learning techniques. Experimental results in a wide variety of natural domains suggest that these multistrategy ensemble learning techniques are, on average, more accurate than their component ensemble learning techniques.

16.
In the class imbalanced learning scenario, traditional machine learning algorithms that focus on optimizing overall accuracy tend to achieve poor classification performance, especially for the minority class in which we are most interested. To solve this problem, many effective approaches have been proposed. Among them, bagging ensemble methods integrated with under-sampling techniques have demonstrated better performance than several alternatives, including bagging ensembles integrated with over-sampling techniques and cost-sensitive methods. Although these under-sampling techniques promote diversity among the generated base classifiers through random partitioning or sampling of the majority class, they take no measure to ensure individual classification performance, which limits the achievable ensemble performance. On the other hand, evolutionary under-sampling (EUS), a novel under-sampling technique, has been successfully applied to searching for the best majority-class subset for training a well-performing nearest neighbor classifier. Inspired by EUS, in this paper we introduce it into the under-sampling bagging framework and propose an EUS-based bagging ensemble method (EUS-Bag), designing a new fitness function that considers three factors to make EUS better suited to the framework. With our fitness function, EUS-Bag can generate a set of accurate and diverse base classifiers. To verify the effectiveness of EUS-Bag, we conduct a series of comparison experiments on 22 two-class imbalanced classification problems. Experimental results measured using recall, geometric mean and AUC all demonstrate its superior performance.
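The under-sampling bagging framework that EUS-Bag builds on can be sketched as follows. This is a generic sketch with random balanced subsets, not the evolutionary subset selection of EUS-Bag; the 1-D nearest-centroid base learner and the toy data are assumptions for illustration.

```python
import random
from collections import Counter

def centroid_learner(sample):
    """Nearest-centroid classifier on a single numeric feature."""
    cents = {}
    for c in (0, 1):
        xs = [x for x, y in sample if y == c]
        cents[c] = sum(xs) / len(xs)
    return lambda x: min(cents, key=lambda c: abs(x - cents[c]))

def undersampling_bagging(data, n_models=11, seed=0):
    """Each base model sees all minority points plus an equally sized
    random subset of the majority class (label 1 = minority)."""
    rng = random.Random(seed)
    minority = [p for p in data if p[1] == 1]
    majority = [p for p in data if p[1] == 0]
    models = []
    for _ in range(n_models):
        sample = minority + rng.sample(majority, len(minority))
        models.append(centroid_learner(sample))
    def predict(x):
        return Counter(m(x) for m in models).most_common(1)[0][0]
    return predict
```

Each balanced subset removes the majority-class bias from its base learner, while the random draws supply the diversity that the vote then exploits; EUS replaces the random draw with an evolutionary search over majority subsets.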

17.
Introducing the idea of ensemble learning into incremental learning can markedly improve learning performance. Most recent research on ensemble-based incremental learning combines several homogeneous classifiers by weighted voting and does not adequately resolve the stability-plasticity dilemma of incremental learning. To address this, a heterogeneous-classifier ensemble incremental learning algorithm is proposed. During training, to make the model more stable, several base classifiers trained on new data are added to a heterogeneous ensemble, while a locality-sensitive hash table stores data sketches for later nearest-neighbour lookup of test samples; to adapt to continually changing data, newly acquired data are also used to update the voting weights of the base classifiers in the ensemble. When predicting the class of a test sample, the data in the locality-sensitive hash table that are similar to the sample serve as a bridge for computing each base classifier's dynamic weight for that sample, and the class is decided by combining the base classifiers' voting weights and dynamic weights. Comparative experiments demonstrate that this incremental algorithm has comparatively high stability and generalization ability.

18.
Working as an ensemble method that establishes a committee of classifiers first and then aggregates their outcomes through majority voting, bagging has attracted considerable research interest and been applied in various application domains. It has demonstrated several advantages, but in its present form, bagging has been found to be less accurate than some other ensemble methods. To unlock its power and expand its user base, we propose an approach that improves bagging through the use of multi-algorithm ensembles. In a multi-algorithm ensemble, multiple classification algorithms are employed. Starting from a study of the nature of diversity, we show that compared to using different training sets alone, using heterogeneous algorithms together with different training sets increases diversity in ensembles, and hence we provide a fundamental explanation for research utilizing heterogeneous algorithms. In addition, we partially address the problem of the relationship between diversity and accuracy by providing a non-linear function that describes the relationship between diversity and correlation. Furthermore, after realizing that the bootstrap procedure is the exclusive source of diversity in bagging, we use heterogeneity as another source of diversity and propose an approach utilizing heterogeneous algorithms in bagging. For evaluation, we consider several benchmark data sets from various application domains. The results indicate that, in terms of F1-measure, our approach outperforms most of the other state-of-the-art ensemble methods considered in experiments and, in terms of mean margin, our approach is superior to all the others considered in experiments.

19.
Class imbalance is widespread in real life, and most traditional classifiers assume a balanced class distribution or equal misclassification costs, so imbalanced data severely degrades their classification performance. For classifying imbalanced data sets, a probability-threshold Bagging classification method, PT-Bagging, is proposed. It combines the threshold-moving technique with the Bagging ensemble algorithm: the training phase uses the training set with its original distribution, while the prediction phase introduces decision-threshold moving and uses calibrated posterior-probability estimates to maximize the performance measure on imbalanced data. Experimental results show that PT-Bagging has better classification performance on imbalanced data.
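The threshold-moving idea combined with bagged probability estimates can be sketched as follows. This is an illustrative sketch, not the paper's PT-Bagging: the crude k-NN posterior estimate and the choice of the positive-class prior as the moved threshold are assumptions made here for the example.

```python
import random

def bagged_posterior(data, x, k=5, n_models=11, seed=0):
    """Average, over bootstrap models, the positive-class frequency
    among the k nearest neighbours (a crude posterior estimate)."""
    rng = random.Random(seed)
    probs = []
    for _ in range(n_models):
        boot = [rng.choice(data) for _ in data]
        nearest = sorted(boot, key=lambda p: abs(p[0] - x))[:k]
        probs.append(sum(y for _, y in nearest) / len(nearest))
    return sum(probs) / len(probs)

def pt_classify(data, x, **kw):
    """Move the decision threshold from 0.5 down to the positive-class
    prior, so the rare class wins whenever its estimated posterior
    exceeds its base rate in the training data."""
    prior = sum(y for _, y in data) / len(data)
    return int(bagged_posterior(data, x, **kw) >= prior)
```

Training is untouched (the bootstrap samples keep the original imbalanced distribution); only the decision rule applied to the averaged posterior changes.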

20.
Bagging, boosting, rotation forest and random subspace methods are well-known re-sampling ensemble methods that generate and combine a diversity of learners using the same learning algorithm for the base classifiers. Boosting and rotation forest algorithms are considered stronger than bagging and random subspace methods on noise-free data. However, there are strong empirical indications that bagging and random subspace methods are much more robust than boosting and rotation forest in noisy settings. For this reason, in this work we built an ensemble of bagging, boosting, rotation forest and random subspace ensembles with 6 sub-classifiers in each, and a voting methodology is then used for the final prediction. We performed a comparison with simple bagging, boosting, rotation forest and random subspace ensembles with 25 sub-classifiers, as well as with other well-known combining methods, on standard benchmark datasets, and the proposed technique had better accuracy in most cases.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号