Similar Documents
17 similar documents found
1.
Application of the Bagging Algorithm to Chinese Text Classification
Bagging is a popular ensemble learning algorithm. This work adopts an improved variant, Attribute Bagging, as the classification algorithm: multiple training sets are obtained by resampling attributes (features), and a Chinese text classifier is built with kNN as the weak classifier. Experimental results show that Attribute Bagging achieves better classification accuracy than standard Bagging.
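As a concrete illustration of the attribute-resampling idea, here is a minimal Python sketch of feature-subset bagging with kNN members combined by majority vote. scikit-learn/NumPy, the number of members, the subset ratio, and k are illustrative assumptions, not details from the paper.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def attribute_bagging_fit(X, y, n_members=15, subset_ratio=0.5, seed=0):
    """Train kNN members, each on a random subset of attributes (features)."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    k = max(1, int(subset_ratio * n_features))
    members = []
    for _ in range(n_members):
        feats = rng.choice(n_features, size=k, replace=False)  # attribute resampling
        members.append((feats, KNeighborsClassifier(n_neighbors=5).fit(X[:, feats], y)))
    return members

def attribute_bagging_predict(members, X):
    """Combine the members by majority vote (assumes integer class labels)."""
    votes = np.stack([clf.predict(X[:, feats]) for feats, clf in members]).astype(int)
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)
```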

2.
A C4.5-Bagging Algorithm for Chinese Text Classification
For Chinese text classification, a new Bagging method is proposed. It uses the C4.5 decision tree algorithm as the weak classifier, obtains multiple training sets by resampling instances, and combines the individual results by a voting rule to produce the final classification. Experiments show that the algorithm's precision, recall, and F1 score are higher than those of C4.5, kNN, and naive Bayes classifiers, giving superior overall performance.
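The instance-resampling-plus-voting scheme described above can be sketched with scikit-learn as follows; CART (DecisionTreeClassifier) stands in for C4.5, which scikit-learn does not provide, and the estimator count is an arbitrary choice.

```python
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

# Bootstrap-resample the instances, train one tree per sample, combine by voting.
bagged_trees = BaggingClassifier(
    DecisionTreeClassifier(criterion="entropy"),  # CART stand-in for C4.5
    n_estimators=25,        # number of resampled training sets (illustrative)
    bootstrap=True,         # instance resampling with replacement
)
# bagged_trees.fit(X_train, y_train); y_pred = bagged_trees.predict(X_test)
```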

3.
To address the over-fitting and poor scalability of the classic C4.5 decision tree algorithm, an improved Bagging-based decision tree algorithm is proposed and parallelized under the MapReduce model. First, C4.5 is improved with Bagging: sampling with replacement produces multiple new training sets of the same size as the original one, a classifier is trained on each of them, and the training results are combined into the final classifier by majority voting. Then the improved algorithm is parallelized under the MapReduce model, so that processing the training sets, selecting the best split attribute and split point, and generating child nodes can all be done in parallel; the improved parallel decision tree algorithm is implemented as a workflow of MapReduce jobs, strengthening its ability to analyze large data sets. Experimental results show that the improved parallel Bagging decision tree algorithm achieves high accuracy and sensitivity, as well as good scalability and speed-up.
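A MapReduce cluster is not needed to see the structure of the parallelization. The sketch below uses Python's multiprocessing pool as a stand-in for the map phase (one bootstrap sample and one tree per task) and a simple collection step as the reduce phase; the tree count, worker count, and split criterion are assumptions, not the paper's configuration.

```python
import numpy as np
from multiprocessing import Pool
from sklearn.tree import DecisionTreeClassifier

def train_one_tree(args):
    """'Map' task: draw one bootstrap sample (same size as the original training
    set) and train a decision tree on it."""
    X, y, seed = args
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(X), size=len(X))   # sampling with replacement
    return DecisionTreeClassifier(criterion="entropy").fit(X[idx], y[idx])

def parallel_bagging(X, y, n_trees=20, n_workers=4):
    """'Reduce' step: collect the independently trained trees; prediction then
    combines their outputs by majority voting."""
    with Pool(n_workers) as pool:
        return pool.map(train_one_tree, [(X, y, s) for s in range(n_trees)])
```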

4.
To improve the accuracy of enterprise credit risk assessment, a PSO-BP ensemble model is proposed. Bagging sampling is used to obtain a sufficient number of different training data sets, different PSO-BP component classifiers are trained on the different training subsets, and the outputs of the component classifiers are finally combined by majority voting. The effectiveness of the model is demonstrated on data sets containing detailed data on domestic and foreign companies.

5.
Class imbalance problems are widespread in real life, while most traditional classifiers assume a balanced class distribution or equal misclassification costs, so imbalanced data severely degrade the performance of traditional classifiers. For classifying imbalanced data sets, a probability-threshold Bagging classification method, PT-Bagging, is proposed. It combines the threshold-moving technique with the Bagging ensemble algorithm: the training stage uses training sets with the original class distribution, while the prediction stage introduces decision-threshold moving, using calibrated posterior probability estimates to maximize the performance measure on imbalanced data. Experimental results show that PT-Bagging has a clear advantage in classifying imbalanced data.
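A minimal sketch of the train-then-move-the-threshold idea follows. It uses scikit-learn's averaged class probabilities and F1 on a held-out validation split as the performance measure; both are stand-in assumptions rather than the paper's exact calibration procedure and metric.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import f1_score

def fit_pt_bagging(X_train, y_train, X_val, y_val, n_estimators=50):
    """Train on the original (imbalanced) distribution, then move the decision
    threshold on averaged posterior probabilities; labels assumed in {0, 1},
    with 1 the minority class."""
    bag = BaggingClassifier(DecisionTreeClassifier(), n_estimators=n_estimators)
    bag.fit(X_train, y_train)
    probs = bag.predict_proba(X_val)[:, 1]                 # posterior estimates
    thresholds = np.linspace(0.05, 0.95, 19)
    best_t = max(thresholds, key=lambda t: f1_score(y_val, (probs >= t).astype(int)))
    return bag, best_t

# Prediction with the moved threshold:
# y_pred = (bag.predict_proba(X_test)[:, 1] >= best_t).astype(int)
```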

6.
沈蕾  石盛平  燕继坤 《计算机工程》2006,32(23):216-217
The basic ideas of one-sided-sampling Bagging and LPU (learning from positive and unlabeled examples) are combined to classify imbalanced data. The main steps are: label all unlabeled instances as negative and, together with the positive examples, train a one-sided-sampling Bagging (SSBagging) learner; use the resulting learner to classify the unlabeled instances and obtain reliable negatives (RN); then train an SSBagging learner on the positives and RN. Classifying with Rocchio and EM is a representative LPU method proposed by Liu et al. A comparison between that LPU method and the one proposed here shows that the latter is superior when the class imbalance is pronounced.

7.
An Analysis of the Effectiveness of AdaBoost
In machine learning, the weak learning theorem states that as long as a weak learning algorithm slightly better than random guessing can be found, a strong learning algorithm with arbitrarily small error can be constructed from it in a certain way. The most commonly used methods built on this theory are AdaBoost and Bagging. However, the error analyses of AdaBoost and Bagging are not yet unified; the training error used by AdaBoost is not the true training error but an error based on sample weights, and whether this is reasonable needs explanation; the conditions that guarantee AdaBoost's effectiveness also need an intuitive interpretation so that the algorithm can be applied with confidence. After adjusting Bagging's error rate and adopting weighted voting, this paper unifies the algorithmic flow and error analysis of AdaBoost and Bagging and, building on an interpretation and proof of the weak learning theorem based on the law of large numbers, analyzes the effectiveness of AdaBoost. It points out that the purpose of AdaBoost's sample-weight adjustment strategy is to keep the distribution of correctly classified samples uniform, that the training error it uses is equal in probability to the true training error, and it gives the principles to follow when training the weak learners so as to guarantee AdaBoost's effectiveness. This not only explains why AdaBoost works but also offers a way to construct new ensemble learning algorithms. By analogy with AdaBoost, some suggestions are also made for Bagging's training-set selection strategy.
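For reference, the sample-weight-based training error and the weight update the abstract refers to can be sketched as below; this is the textbook AdaBoost formulation with labels in {-1, +1}, not the paper's own notation.

```python
import numpy as np

def adaboost_round(weights, y_true, y_pred):
    """One AdaBoost round: eps is the sample-weight-based training error the
    abstract discusses, not the plain misclassification rate."""
    miss = (y_true != y_pred)
    eps = np.sum(weights[miss])                        # weighted training error
    alpha = 0.5 * np.log((1 - eps) / max(eps, 1e-12))  # weak learner's vote weight
    weights = weights * np.exp(-alpha * y_true * y_pred)
    return weights / weights.sum(), alpha
```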

8.
Inspired by cascade structures, a new Bagging-based cascade classification method for imbalanced data sets is proposed. At each level it removes a portion of the majority-class samples so that the data set gradually approaches balance, applies under-sampling to obtain the training set, trains a classifier with the Bagging algorithm, and finally combines the classifiers trained at every level into a new classifier. Experimental results on 10 UCI data sets show that the method outperforms Bagging and AdaBoost in recall and F-value.
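The level-by-level rebalancing can be sketched as below. Here the majority-class examples discarded at each level are chosen at random, whereas the paper's discard criterion may differ; the 0/1 label convention, level count, and drop ratio are assumptions.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

def cascade_bagging(X, y, n_levels=4, drop_ratio=0.3, seed=0):
    """At each level: undersample the remaining majority class down to the minority
    size, train a Bagging classifier, then discard part of the majority class so
    the data gradually approach balance."""
    rng = np.random.default_rng(seed)
    maj_idx = np.flatnonzero(y == 0)                  # assumes 0 = majority class
    min_idx = np.flatnonzero(y == 1)                  # assumes 1 = minority class
    stages = []
    for _ in range(n_levels):
        sampled_maj = rng.choice(maj_idx, size=min(len(min_idx), len(maj_idx)), replace=False)
        idx = np.concatenate([sampled_maj, min_idx])
        stages.append(BaggingClassifier(DecisionTreeClassifier(), n_estimators=20).fit(X[idx], y[idx]))
        keep = max(len(min_idx), int(len(maj_idx) * (1 - drop_ratio)))
        maj_idx = rng.choice(maj_idx, size=min(keep, len(maj_idx)), replace=False)
    return stages   # combine the stage classifiers, e.g. by averaging predict_proba
```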

9.
Data in the intrusion detection domain are often high-dimensional and nonlinear and contain much noise, redundancy, and many continuous attributes, so general pattern classification methods cannot process them effectively. To further improve intrusion detection performance, an ensemble intrusion detection algorithm based on neighborhood rough sets is proposed. Bagging is used to generate multiple training subsets with large diversity; to accommodate the continuous attributes of intrusion detection data, attribute reduction is performed on each subset with neighborhood rough set models of different radii, removing redundancy and noise. This attribute reduction improves the classification performance of the attribute subsets and at the same time yields training subsets with even greater diversity. SVMs are used to train multiple base classifiers, and the detection accuracy of each base classifier is used as its weight in a weighted ensemble. Simulation results on the KDD99 data set show that the algorithm effectively improves the accuracy and efficiency of intrusion detection and has good generalization ability and stability.

10.
A Bagging-Based Combined k-NN Prediction Model and Method
The k-nearest-neighbor method predicts with a single value of k and therefore cannot accommodate the feature differences that may exist among instances, so overall prediction accuracy is hard to guarantee. To address this, a Bagging-based combined k-NN prediction model is proposed, and on this basis a Bgk-NN prediction method with attribute selection is implemented. The method builds a set of individualized prediction models by training; each model independently produces a prediction for an unknown instance, and the median of these predictions is taken as the combined result. Bgk-NN prediction is applicable to data sets containing both discrete- and continuous-valued attributes. Experiments on standard data sets show that Bgk-NN clearly improves prediction accuracy over the traditional k-NN method.
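A minimal sketch of the bagged-kNN-with-median-combination idea is shown below; the paper's attribute-selection step is omitted, and the per-member choices of k are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def fit_bgknn(X, y, n_members=25, seed=0):
    """Train k-NN members on bootstrap samples; each member may use its own k."""
    rng = np.random.default_rng(seed)
    members = []
    for _ in range(n_members):
        idx = rng.integers(0, len(X), size=len(X))    # bootstrap resampling
        k = int(rng.choice([3, 5, 7, 9]))             # per-member k (illustrative)
        members.append(KNeighborsRegressor(n_neighbors=k).fit(X[idx], y[idx]))
    return members

def predict_bgknn(members, X):
    """Combine the independent member predictions by taking their median."""
    return np.median([m.predict(X) for m in members], axis=0)
```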

11.
The ensemble method is a powerful data mining paradigm, which builds a classification model by integrating multiple diversified component learners. Bagging is one of the most successful ensemble methods. It is made of bootstrap-inspired classifiers and uses these classifiers to get an aggregated classifier. However, in bagging, bootstrapped training sets become increasingly similar as redundancy increases. Besides redundancy, any training set is usually subject to noise. Moreover, the training set might be imbalanced. Thus, each training instance has a different impact on the learning process. This paper explores some properties of the ensemble margin and its use in improving the performance of bagging. We introduce a new approach to measure the importance of training data in learning, based on the margin theory. Then, a new bagging method concentrating on critical instances is proposed. This method is more accurate than bagging and more robust than boosting. Compared to bagging, it reduces the bias while generally keeping the same variance. Our findings suggest that (a) examples with low margins tend to be more critical for the classifier performance; (b) examples with higher margins tend to be more redundant; (c) misclassified examples with high margins tend to be noisy examples. Our experimental results on 15 various data sets show that the generalization error of bagging can be reduced up to 2.5% and its resilience to noise strengthened by iteratively removing both typical and noisy training instances, reducing the training set size by up to 75%.
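One common way to compute an ensemble margin is the fraction of member votes for the true class minus the largest fraction of votes for any other class; the sketch below uses that definition, which may differ from the exact margin variant used in the paper, and assumes integer class labels.

```python
import numpy as np

def ensemble_margins(members, X, y):
    """Margin of each instance in [-1, 1]: vote fraction for the true class minus
    the largest vote fraction for any other class. Low margins flag critical
    instances; misclassified high-margin instances are likely noise."""
    votes = np.stack([m.predict(X) for m in members]).astype(int)   # (n_members, n)
    n_classes = max(int(votes.max()), int(np.max(y))) + 1
    margins = np.empty(len(y))
    for i in range(len(y)):
        frac = np.bincount(votes[:, i], minlength=n_classes) / len(members)
        true_frac = frac[int(y[i])]
        frac[int(y[i])] = -1.0            # exclude the true class from the "other" max
        margins[i] = true_frac - frac.max()
    return margins
```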

12.
Trimmed bagging
Bagging has been found to be successful in increasing the predictive performance of unstable classifiers. Bagging draws bootstrap samples from the training sample, applies the classifier to each bootstrap sample, and then averages over all obtained classification rules. The idea of trimmed bagging is to exclude the bootstrapped classification rules that yield the highest error rates, as estimated by the out-of-bag error rate, and to aggregate over the remaining ones. In this note we explore the potential benefits of trimmed bagging. On the basis of numerical experiments, we conclude that trimmed bagging performs comparably to standard bagging when applied to unstable classifiers such as decision trees, but yields better results when applied to more stable base classifiers, like support vector machines.
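The trimming step can be sketched as follows: each member is scored on its own out-of-bag instances and the worst-scoring fraction is excluded before aggregation. The member count, trim fraction, and tree base learner are illustrative assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def trimmed_bagging(X, y, n_members=50, trim_frac=0.25, base=DecisionTreeClassifier, seed=0):
    """Train bootstrap members, score each on its out-of-bag instances, and keep
    only the (1 - trim_frac) members with the lowest out-of-bag error."""
    rng = np.random.default_rng(seed)
    scored = []
    for _ in range(n_members):
        idx = rng.integers(0, len(X), size=len(X))           # bootstrap sample
        oob = np.setdiff1d(np.arange(len(X)), idx)           # out-of-bag instances
        clf = base().fit(X[idx], y[idx])
        oob_err = np.mean(clf.predict(X[oob]) != y[oob]) if len(oob) else 0.0
        scored.append((oob_err, clf))
    scored.sort(key=lambda pair: pair[0])
    keep = scored[: int(np.ceil((1 - trim_frac) * n_members))]
    return [clf for _, clf in keep]   # aggregate the kept members by voting
```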

13.
Stock price prediction is important both for regulators to understand how financial markets are operating and for investors to avoid the high risk of the stock market. A method based on a gated recurrent unit (GRU) neural network and bagging is proposed and applied to stock index prediction. The model processes the training data with the Bagging method, introducing randomness into model construction, and uses the GRU model to predict stock prices, which ultimately reduces prediction error and improves accuracy. Comparing experimental results on four data sets shows that: (1) the GRU model predicts stock index data well and in most cases has smaller prediction error than the other two single models; (2) the GRU model combined with Bagging has strong predictive ability, with smaller prediction error and higher prediction stability than the three benchmark models (GRU, ELM, BP).

14.
Ensembles of classifiers that are trained on different parts of the input space provide good results in general. As a popular boosting technique, AdaBoost is an iterative and gradient-based deterministic method used for this purpose where an exponential loss function is minimized. Bagging is a random search based ensemble creation technique where the training set of each classifier is arbitrarily selected. In this paper, a genetic algorithm based ensemble creation approach is proposed where both resampled training sets and classifier prototypes evolve so as to maximize the combined accuracy. The objective function based random search procedure of the resultant system guided by both ensemble accuracy and diversity can be considered to share the basic properties of bagging and boosting. Experimental results have shown that the proposed approach provides better combined accuracies using fewer classifiers than AdaBoost.

15.
An Efficient Method To Estimate Bagging's Generalization Error
Bagging (Breiman, 1994a) is a technique that tries to improve a learning algorithm's performance by using bootstrap replicates of the training set (Efron & Tibshirani, 1993; Efron, 1979). The computational requirements for estimating the resultant generalization error on a test set by means of cross-validation are often prohibitive; for leave-one-out cross-validation one needs to train the underlying algorithm on the order of mN times, where m is the size of the training set and N is the number of replicates. This paper presents several techniques for estimating the generalization error of a bagged learning algorithm without invoking yet more training of the underlying learning algorithm (beyond that of the bagging itself), as is required by cross-validation-based estimation. These techniques all exploit the bias-variance decomposition (Geman, Bienenstock & Doursat, 1992; Wolpert, 1996). The best of our estimators also exploits stacking (Wolpert, 1992). In a set of experiments reported here, it was found to be more accurate than both the alternative cross-validation-based estimator of the bagged algorithm's error and the cross-validation-based estimator of the underlying algorithm's error. This improvement was particularly pronounced for small test sets. This suggests a novel justification for using bagging: more accurate estimation of the generalization error than is possible without bagging.

16.
Randomizing Outputs to Increase Prediction Accuracy
Breiman  Leo 《Machine Learning》2000,40(3):229-242
Bagging and boosting reduce error by changing both the inputs and outputs to form perturbed training sets, growing predictors on these perturbed training sets and combining them. An interesting question is whether it is possible to get comparable performance by perturbing the outputs alone. Two methods of randomizing outputs are experimented with. One is called output smearing and the other output flipping. Both are shown to consistently do better than bagging.
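A simplified sketch of the output-flipping idea follows: only the labels are perturbed, each member is trained on the same inputs with a small fraction of labels randomly flipped, and the members are combined by voting. Breiman's actual scheme controls the class proportions after flipping; the uniform flipping, flip rate, member count, and tree learner here are assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def output_flipping_ensemble(X, y, n_members=25, flip_rate=0.1, seed=0):
    """Perturb only the outputs: randomly flip a fraction of the class labels for
    each member (y assumed to be a NumPy array of class labels)."""
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    members = []
    for _ in range(n_members):
        y_pert = y.copy()
        flip = rng.random(len(y)) < flip_rate
        y_pert[flip] = rng.choice(classes, size=int(flip.sum()))  # may redraw the same label
        members.append(DecisionTreeClassifier().fit(X, y_pert))
    return members   # predict by majority vote over the members
```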

17.
Research on Applications of Out-of-Bag Samples
张春霞  郭高 《软件》2011,(3):1-4
A Bagging ensemble combines unstable base classifiers and thereby greatly reduces the classification error of a "weak" learning algorithm; out-of-bag (OOB) samples are a natural by-product of Bagging. At present, OOB samples are widely used to estimate the generalization error of Bagging ensembles, to construct related ensemble classifiers, and in other applications. This paper surveys the applications of OOB samples, describes the main research topics and their characteristics, and discusses possible future research directions.
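The most common OOB application the survey mentions, estimating the ensemble's generalization error, can be sketched as below: each training instance is predicted only by the members whose bootstrap sample omitted it. The tree base learner and member count are illustrative assumptions, and labels are assumed to be nonnegative integers.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def oob_error_estimate(X, y, n_members=100, seed=0):
    """Out-of-bag estimate of a Bagging ensemble's generalization error."""
    rng = np.random.default_rng(seed)
    n = len(X)
    votes = [[] for _ in range(n)]
    for _ in range(n_members):
        idx = rng.integers(0, n, size=n)              # bootstrap sample
        oob = np.setdiff1d(np.arange(n), idx)         # out-of-bag instances
        clf = DecisionTreeClassifier().fit(X[idx], y[idx])
        for i, pred in zip(oob, clf.predict(X[oob])):
            votes[i].append(int(pred))
    oob_pred = np.array([np.bincount(v).argmax() if v else -1 for v in votes])
    covered = oob_pred != -1                          # instances with at least one OOB vote
    return np.mean(oob_pred[covered] != y[covered])
```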
