Similar Documents
20 similar documents retrieved (search time: 55 ms)
1.
Rotation Forest, an effective ensemble classifier generation technique, works by using principal component analysis (PCA) to rotate the original feature axes so that different training sets for learning the base classifiers can be formed. This paper presents a variant of Rotation Forest that can be viewed as a combination of Bagging and Rotation Forest. Bagging is used here to inject more randomness into Rotation Forest and thereby increase the diversity among the ensemble members. Experiments conducted on 33 benchmark classification data sets from the UCI repository, with a classification tree adopted as the base learning algorithm, demonstrate that the proposed method generally produces ensemble classifiers with lower error than Bagging, AdaBoost, and Rotation Forest. A bias–variance analysis of the error shows that the proposed method improves on the prediction error of a single classifier by reducing the variance term much more than the other ensemble procedures considered. Furthermore, results on data sets with artificial classification noise indicate that the new method is more robust to noise, and kappa-error diagrams are employed to investigate the diversity–accuracy patterns of the ensemble classifiers.
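To make the construction concrete, here is a minimal Python sketch of the Bagging + Rotation Forest idea, assuming scikit-learn and NumPy; the function names and parameters are illustrative, not the paper's code, and integer class labels are assumed.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier

def train_bagged_rotation_forest(X, y, n_trees=10, n_subsets=3, rng=None):
    rng = np.random.default_rng(rng)
    n, d = X.shape
    ensemble = []
    for _ in range(n_trees):
        boot = rng.integers(0, n, size=n)          # bagging: bootstrap the training set
        Xb, yb = X[boot], y[boot]
        perm = rng.permutation(d)                  # rotation: split features into subsets
        rotation = np.zeros((d, d))
        for block in np.array_split(perm, n_subsets):
            pca = PCA().fit(Xb[:, block])          # PCA on each feature subset
            rotation[np.ix_(block, block)] = pca.components_.T
        tree = DecisionTreeClassifier().fit(Xb @ rotation, yb)
        ensemble.append((rotation, tree))
    return ensemble

def predict(ensemble, X):
    votes = np.stack([tree.predict(X @ R) for R, tree in ensemble])
    # majority vote over the trees (assumes integer class labels)
    return np.apply_along_axis(lambda v: np.bincount(v.astype(int)).argmax(), 0, votes)
```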

2.
Research on the generalization performance of ensemble learning has made great progress, but the error analysis of ensemble learning still needs further study. Since cross-validation plays an important role in model performance evaluation in statistical machine learning, blocked 3×2 cross-validation and k-fold cross-validation are applied to build, for each sample point, an ensemble of weighted predictions, and an error analysis is carried out. Experiments on simulated and real data show that the prediction error of the ensemble based on blocked 3×2 cross-validation is smaller than that of a single learner, and that the variance of the ensemble is smaller than the variance of a single learner. Compared with ensembles based on k-fold cross-validation, the generalization error of the blocked 3×2 cross-validation ensemble is smaller, indicating that the ensemble model based on blocked 3×2 cross-validation is more stable.
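A minimal sketch of the blocked 3×2 cross-validation ensemble idea, assuming scikit-learn and NumPy: three repetitions of 2-fold splits, with each sample's prediction averaged over the models that did not train on it. Equal weights are used here for simplicity, where the paper assigns per-sample weights.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.tree import DecisionTreeRegressor

def blocked_3x2_ensemble_predictions(X, y, base=DecisionTreeRegressor, n_reps=3):
    n = len(y)
    preds, counts = np.zeros(n), np.zeros(n)
    for rep in range(n_reps):
        kf = KFold(n_splits=2, shuffle=True, random_state=rep)   # one 2-fold split
        for train_idx, test_idx in kf.split(X):
            model = base().fit(X[train_idx], y[train_idx])
            preds[test_idx] += model.predict(X[test_idx])        # out-of-fold prediction
            counts[test_idx] += 1
    return preds / counts   # per-sample averaged (equally weighted) ensemble prediction
```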

3.
An ensemble of multiple classifiers is widely considered an effective technique for improving the accuracy and stability of a single classifier. This paper proposes a framework of sparse ensembles and deals with new linear weighted combination methods for them. A sparse ensemble combines the outputs of multiple classifiers using a sparse weight vector, so that only classifiers with nonzero weight coefficients participate in the ensemble. When the continuous outputs of multiple classifiers are provided, the problem of solving for the sparse weight vector can be formulated as a linear programming (LP) problem that exploits the hinge loss and/or 1-norm regularization, both of which are sparsity-inducing techniques used in machine learning. In these LP-based methods, the ensemble training error is minimized while the weight vector of the ensemble is controlled, which can be thought of as implementing the structural risk minimization rule and naturally explains the good performance of the methods. Promising experimental results on UCI data sets and radar high-resolution range profile data are presented.
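A minimal sketch of one plausible LP formulation, an assumption consistent with (but not necessarily identical to) the paper's: hinge-loss slacks plus 1-norm regularization, solved with SciPy's linprog for binary labels in {-1, +1}.

```python
import numpy as np
from scipy.optimize import linprog

def sparse_ensemble_weights(F, y, lam=0.1):
    # F: (n_samples, n_classifiers) continuous classifier outputs; y in {-1, +1}
    n, T = F.shape
    # variables: u (T), v (T), slacks xi (n); w = u - v, so |w| = u + v at the optimum
    c = np.concatenate([lam * np.ones(2 * T), np.ones(n)])
    yF = y[:, None] * F
    A_ub = np.hstack([-yF, yF, -np.eye(n)])        # encodes y_i * (F_i . w) + xi_i >= 1
    b_ub = -np.ones(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")  # all vars >= 0 by default
    u, v = res.x[:T], res.x[T:2 * T]
    return u - v                                    # sparse ensemble weight vector
```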

4.
Rough subspace ensemble based on dynamic weighting
A dynamically weighted rough subspace ensemble method, EROS-DW, is proposed. Rough set attribute reduction is used to obtain multiple reduced feature subsets, on which base classifiers are trained. In the classification stage, a weight is dynamically assigned to each base classifier according to the specific features of a given test sample, and the classifiers' outputs are combined by weighted voting. The method's performance is tested on UCI benchmark data sets; experimental results show that EROS-DW achieves higher classification accuracy than classical ensemble methods.

5.
Dynamic weighting ensemble classifiers based on cross-validation
Ensembles of classifiers constitute one of the main current directions in machine learning and data mining. It is accepted that ensemble methods can be divided into static and dynamic ones. Dynamic ensemble methods explore the use of different classifiers for different samples and may therefore achieve better generalization than static ensemble methods. However, for most dynamic approaches based on the KNN rule, an additional portion of the training samples must be set aside for estimating the “local classification performance” of each base classifier. When the number of training samples is insufficient, this leads to a less accurate training model and unreliable estimates of the base classifiers' local performance, which in turn hurts the integrated performance. This paper presents a new dynamic ensemble model that introduces cross-validation into the evaluation of local performance and then dynamically assigns a weight to each component classifier. Experimental results on 10 UCI data sets demonstrate that when the training set is not large, the proposed method achieves better performance than several dynamic ensemble methods as well as some classical static ensemble approaches.
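A minimal sketch of the idea, assuming scikit-learn and NumPy; the exact weighting rule is the paper's own, so this is only an illustrative variant in which out-of-fold predictions estimate each classifier's accuracy among a test point's nearest training neighbors.

```python
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.neighbors import NearestNeighbors

def dynamic_weighted_predict(classifiers, X_train, y_train, X_test, k=10, cv=5):
    # out-of-fold predictions: no extra hold-out set needs to be sacrificed
    oof = [cross_val_predict(clf, X_train, y_train, cv=cv) for clf in classifiers]
    fitted = [clf.fit(X_train, y_train) for clf in classifiers]
    nn = NearestNeighbors(n_neighbors=k).fit(X_train)
    _, idx = nn.kneighbors(X_test)
    preds = np.stack([clf.predict(X_test) for clf in fitted])    # (T, n_test)
    out = []
    for j, neigh in enumerate(idx):
        # local accuracy of each classifier on the k nearest training samples
        w = np.array([(p[neigh] == y_train[neigh]).mean() for p in oof])
        votes = {}
        for t, lab in enumerate(preds[:, j]):
            votes[lab] = votes.get(lab, 0.0) + w[t]              # weighted vote
        out.append(max(votes, key=votes.get))
    return np.array(out)
```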

6.
We propose a two-layer decision fusion technique, called Fuzzy Stacked Generalization (FSG), which establishes a hierarchical distance learning architecture. At the base layer of an FSG, fuzzy k-NN classifiers receive different feature sets, each extracted from the same dataset to gain multiple views of it. At the meta-layer, a fusion space is first constructed by aggregating the decision spaces of all base-layer classifiers; a fuzzy k-NN classifier is then trained in the fusion space by minimizing the difference between the large-sample and N-sample classification error. In order to measure the degree of collaboration among the base-layer classifiers and the diversity of the feature spaces, a new measure, called shareability, is introduced. Shareability is defined as the number of samples that are correctly classified by at least one of the base-layer classifiers in the FSG. In the experiments, we observe that FSG performs better than popular distance learning and ensemble learning algorithms when the shareability measure is large enough that most samples are correctly classified by at least one base-layer classifier. The relationship between the proposed measure and state-of-the-art diversity measures is analyzed experimentally. Tests performed on a variety of artificial and real-world benchmark datasets show that the classification performance of FSG increases, relative to state-of-the-art ensemble learning and distance learning methods, as the number of classes increases.
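A minimal sketch of the two-layer architecture, with scikit-learn's crisp KNeighborsClassifier standing in for the paper's fuzzy k-NN; all names are illustrative.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def train_fsg(feature_sets_train, y, k=5):
    # feature_sets_train: list of (n_samples, d_i) arrays, one per feature extractor
    base = [KNeighborsClassifier(n_neighbors=k).fit(F, y) for F in feature_sets_train]
    # fusion space: concatenated class-membership (decision-space) outputs;
    # in practice, out-of-fold probabilities would avoid the self-neighbor leak
    fusion = np.hstack([clf.predict_proba(F)
                        for clf, F in zip(base, feature_sets_train)])
    meta = KNeighborsClassifier(n_neighbors=k).fit(fusion, y)   # meta-layer k-NN
    return base, meta

def predict_fsg(base, meta, feature_sets_test):
    fusion = np.hstack([clf.predict_proba(F)
                        for clf, F in zip(base, feature_sets_test)])
    return meta.predict(fusion)
```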

7.
Removing or filtering outliers and mislabeled instances prior to training a learning algorithm has been shown to increase classification accuracy, especially on noisy data sets. A popular approach is to remove any instance that is misclassified by a learning algorithm. However, the use of ensemble methods has also been shown to generally increase classification accuracy. In this paper, we examine filtering and ensembling extensively. We examine 9 learning algorithms, used individually and ensembled together, as filtering algorithms, as well as the effects of filtering on the 9 chosen learning algorithms, over a set of 54 data sets. We compare the filtering results with a majority-voting ensemble. We find that the majority-voting ensemble significantly outperforms filtering unless high amounts of noise are present in the data set. Additionally, in most cases, using an ensemble of learning algorithms for filtering produces a greater increase in classification accuracy than using a single learning algorithm for filtering.
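A minimal sketch of majority-vote ensemble filtering, assuming scikit-learn and NumPy: an instance is discarded when most cross-validated learners misclassify it, and the final model is trained on what remains.

```python
import numpy as np
from sklearn.model_selection import cross_val_predict

def ensemble_filter(filter_learners, final_learner, X, y, cv=5):
    # out-of-fold predictions of each filtering learner
    oof = np.stack([cross_val_predict(clf, X, y, cv=cv) for clf in filter_learners])
    miss = (oof != y).mean(axis=0)          # fraction of learners that misclassify
    keep = miss < 0.5                       # majority-vote filtering rule
    return final_learner.fit(X[keep], y[keep])
```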

8.
The ensemble method is a powerful data mining paradigm that builds a classification model by integrating multiple diversified component learners. Bagging is one of the most successful ensemble methods: it builds bootstrap-inspired classifiers and aggregates them into a combined classifier. However, in bagging, the bootstrapped training sets become more and more similar as redundancy increases. Besides redundancy, any training set is usually subject to noise, and it may also be imbalanced. Thus, each training instance has a different impact on the learning process. This paper explores some properties of the ensemble margin and its use in improving the performance of bagging. We introduce a new approach, based on margin theory, to measure the importance of training data in learning. Then a new bagging method concentrating on critical instances is proposed. This method is more accurate than bagging and more robust than boosting; compared to bagging, it reduces the bias while generally keeping the same variance. Our findings suggest that (a) examples with low margins tend to be more critical for classifier performance; (b) examples with higher margins tend to be more redundant; and (c) misclassified examples with high margins tend to be noisy. Our experimental results on 15 data sets show that the generalization error of bagging can be reduced by up to 2.5%, and its resilience to noise strengthened, by iteratively removing both typical and noisy training instances, reducing the training set size by up to 75%.
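A minimal sketch of the ensemble margin described above, assuming NumPy and integer class labels: the fraction of votes for an instance's true class minus the largest fraction for any other class.

```python
import numpy as np

def ensemble_margins(vote_matrix, y, n_classes):
    # vote_matrix: (n_classifiers, n_samples) array of predicted integer labels
    T, n = vote_matrix.shape
    margins = np.empty(n)
    for i in range(n):
        counts = np.bincount(vote_matrix[:, i], minlength=n_classes) / T
        true_frac = counts[y[i]]
        counts[y[i]] = -1.0                # exclude the true class from the max
        margins[i] = true_frac - counts.max()
    return margins                          # in [-1, 1]; low margin = critical instance
```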

9.
Recently, topic models such as Latent Dirichlet Allocation (LDA) and Non-negative Matrix Factorization (NMF) have made important progress towards generating high-level knowledge from large corpora. However, because these algorithms rely on random initialization, they produce different results on the same corpus with the same parameters, which is referred to as the instability problem. Ensembles of NMF are known to be much more stable and accurate than individual NMFs, but training multiple NMFs for ensembling is computationally expensive. In this paper, we propose a novel scheme to achieve the seemingly contradictory goal of ensembling multiple NMFs without any additional training cost. We train a single NMF with a cyclical learning rate schedule, which can converge to several local minima along its optimization path. We add the result to the ensemble each time the model converges, and then restart the optimization with a large learning rate that helps escape the current local minimum. Experiments performed on text corpora with a number of assessment measures show that our method reduces instability at no additional training cost while simultaneously yielding more accurate topic models than traditional single methods and ensemble methods.
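A minimal sketch of snapshot ensembling with a cyclical learning rate, using a hand-rolled projected-gradient NMF (not the paper's implementation); the fixed base step size is an assumption and may need tuning to the scale of X.

```python
import numpy as np

def nmf_snapshot_ensemble(X, rank, n_cycles=5, steps=200, lr_max=1e-3, rng=0):
    rng = np.random.default_rng(rng)
    n, m = X.shape
    W, H = rng.random((n, rank)), rng.random((rank, m))
    snapshots = []
    for _ in range(n_cycles):
        for t in range(steps):
            lr = 0.5 * lr_max * (1 + np.cos(np.pi * t / steps))  # cosine annealing
            R = W @ H - X                                        # residual
            W = np.maximum(W - lr * (R @ H.T), 0)                # projected gradient
            H = np.maximum(H - lr * (W.T @ R), 0)
        snapshots.append((W.copy(), H.copy()))   # converged snapshot joins the ensemble
        # restart: the rate jumps back to lr_max at the top of the next cycle,
        # helping the optimizer escape the current local minimum
    return snapshots
```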

10.
Traditional lightning-data prediction methods usually adopt a single best machine learning algorithm and seldom account for phenomena such as the spatio-temporal variation of meteorological data. To address this, a short-term lightning forecasting algorithm that combines multiple machine learners under an ensemble strategy is proposed. First, attribute reduction is applied to the meteorological data to lower its dimensionality. Second, several heterogeneous machine learning classifiers are trained on the data set, and the optimal base classifiers are selected according to prediction quality. Finally, weights are trained for the optimal base classifiers, which are combined by the ensemble strategy to produce the final classifier. Experiments show that this method outperforms the traditional single-best-model approach, raising average prediction accuracy by 9.5%.

11.
Learning from imperfect (noisy) information sources is a challenging and practical issue for many data mining applications. Common practices include enhancing data quality with preprocessing techniques or employing robust learning algorithms that avoid overly complicated structures which overfit the noise. The essential goal is to reduce the impact of noise and ultimately enhance the learners built from noise-corrupted data. In this paper, we propose a novel corrective classification (C2) design, which incorporates data cleansing, error correction, Bootstrap sampling, and classifier ensembling for effective learning from noisy data sources. C2 differs from existing classifier ensembling and robust learning algorithms in two respects. On the one hand, the set of diverse base learners constituting the C2 ensemble is constructed via a Bootstrap sampling process; on the other hand, C2 further improves each base learner by unifying error detection, correction, and data cleansing to reduce the impact of noise. Being corrective, the classifier ensemble is built from data preprocessed/corrected by the data cleansing and correcting modules. Experimental comparisons demonstrate that C2 is not only more accurate than the learner built from the original noisy sources, but also more reliable than Bagging [4] or the aggressive classifier ensemble (ACE) [56], which are two degenerated components/variants of C2. The comparisons also indicate that C2 is more stable than Boosting and DECORATE, two state-of-the-art ensembling methods. For real-world imperfect information sources (i.e. noisy training and/or test data), C2 delivers more accurate and reliable prediction models than its peers.
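A minimal sketch of the C2 idea with a deliberately simplified correction rule (illustrative, not the paper's modules): out-of-fold consensus corrects unanimously misclassified labels before bootstrap-trained base learners are built. Assumes scikit-learn and NumPy.

```python
import numpy as np
from sklearn.model_selection import KFold, cross_val_predict
from sklearn.tree import DecisionTreeClassifier

def c2_ensemble(X, y, n_learners=10, base=DecisionTreeClassifier, rng=0):
    rng = np.random.default_rng(rng)
    # error detection: several out-of-fold passes with different splits
    oof = np.stack([cross_val_predict(base(), X, y,
                                      cv=KFold(5, shuffle=True, random_state=i))
                    for i in range(5)])
    consensus = np.apply_along_axis(
        lambda v: np.bincount(v.astype(int)).argmax(), 0, oof)
    # error correction: relabel instances every pass misclassifies the same way
    y_clean = np.where((oof != y).all(axis=0), consensus, y)
    ensemble = []
    for _ in range(n_learners):
        boot = rng.integers(0, len(y), size=len(y))     # Bootstrap sampling
        ensemble.append(base().fit(X[boot], y_clean[boot]))
    return ensemble
```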

12.
Cost Complexity-Based Pruning of Ensemble Classifiers
In this paper we study methods that combine multiple classification models learned over separate data sets. Numerous studies posit that such approaches provide the means to efficiently scale learning to large data sets, while also boosting the accuracy of individual classifiers. These gains, however, come at the expense of increased demand for run-time system resources: the final ensemble meta-classifier may consist of a large collection of base classifiers that require more memory and slow down classification throughput. Here, we describe an algorithm for pruning the ensemble meta-classifier (i.e., discarding a subset of the available base classifiers) as a means of reducing its size while preserving its accuracy, and we present a technique for measuring the trade-off between predictive performance and available run-time system resources. The algorithm is independent of the method used initially to compute the meta-classifier. It is based on decision tree pruning methods and relies on mapping an arbitrary ensemble meta-classifier to a decision tree model. Through an extensive empirical study of meta-classifiers computed over two real data sets, we show our pruning algorithm to be a robust and competitive approach to discarding classification models without degrading the overall predictive performance of the smaller ensemble computed over those that remain after pruning.
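For intuition, here is a much simpler pruning sketch in the same spirit, not the paper's decision-tree mapping: greedy backward elimination drops the base classifier whose removal costs the least validation accuracy, until the drop relative to the full ensemble exceeds a tolerance.

```python
import numpy as np

def greedy_prune(pred_matrix, y_val, max_acc_drop=0.01):
    # pred_matrix: (n_classifiers, n_val) NumPy array of integer predictions
    def vote_acc(idx):
        votes = np.apply_along_axis(
            lambda v: np.bincount(v.astype(int)).argmax(), 0, pred_matrix[idx])
        return (votes == y_val).mean()
    keep = list(range(len(pred_matrix)))
    full_acc = vote_acc(keep)                      # accuracy of the full ensemble
    while len(keep) > 1:
        accs = [vote_acc([k for k in keep if k != c]) for c in keep]
        best = int(np.argmax(accs))                # cheapest classifier to remove
        if full_acc - accs[best] > max_acc_drop:   # stop before accuracy degrades
            break
        keep.pop(best)
    return keep                                     # indices of retained classifiers
```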

13.
To improve facial expression classification performance, a Quadratic Optimization Choice (QOC) selective ensemble classification model based on ensemble learning theory is proposed. First, nine base classifiers are ranked by performance, and the top 30% are selected as candidate base classifiers for the ensemble. Second, a family of ensemble models is generated according to combination rules. Finally, a second (quadratic) optimization selection is performed over this family, choosing the subset of ensemble classifiers with the smallest generalization error and thereby determining the optimal ensemble model. To validate the performance of the QOC model, ensembles using the maximum, minimum, and mean combination rules were chosen as baselines. Experimental results show that, relative to the base classifiers, the QOC model achieves better classification, especially for the poorly recognized sad-expression class, whose average recognition rate improves by 21.11%. The QOC model also significantly outperforms non-selective ensemble models.
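A minimal sketch of the two-stage selection, assuming NumPy and integer labels; the validation error serves here as a proxy for the generalization error minimized in the paper.

```python
import itertools
import numpy as np

def qoc_select(pred_matrix, y_val, top_frac=0.3):
    # pred_matrix: (n_classifiers, n_val) validation predictions of each classifier
    accs = (pred_matrix == y_val).mean(axis=1)
    n_keep = max(1, int(len(pred_matrix) * top_frac))
    candidates = np.argsort(accs)[::-1][:n_keep]          # stage 1: rank, keep top 30%
    best_subset, best_err = None, np.inf
    for r in range(1, n_keep + 1):                        # stage 2: subset search
        for subset in itertools.combinations(candidates, r):
            votes = np.apply_along_axis(
                lambda v: np.bincount(v.astype(int)).argmax(), 0,
                pred_matrix[list(subset)])
            err = (votes != y_val).mean()
            if err < best_err:
                best_subset, best_err = subset, err       # smallest-error ensemble
    return best_subset, best_err
```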

14.
AdaBoost is a classical ensemble learning framework that builds a strong learner as a linear combination of weak classifiers; its classification accuracy is far higher than that of a single weak classifier, and it enjoys good generalization and training error. However, AdaBoost cannot prune the weak classifiers in its output model and therefore lacks good interpretability. This paper introduces a genetic algorithm into the AdaBoost model and proposes an Ensemble evolve classification algorithm for controlling the size of the final model (ECSM). Within the AdaBoost iterative framework, the genetic operators and the evaluation function enforce diversity in the population while retaining the better classifiers. Experimental results show that, compared with classical AdaBoost, the proposed algorithm greatly reduces the number of classifiers while essentially preserving classification accuracy.
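A minimal sketch of the idea (illustrative, not the ECSM operators): a genetic algorithm evolves bit masks over a fitted AdaBoost's weak learners, with fitness trading validation accuracy against ensemble size. Binary labels in {0, 1} and scikit-learn are assumed.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def evolve_mask(X_tr, y_tr, X_val, y_val, pop=20, gens=30, size_penalty=0.002, rng=0):
    rng = np.random.default_rng(rng)
    ada = AdaBoostClassifier(n_estimators=100).fit(X_tr, y_tr)
    # signed weak-learner outputs scaled by their AdaBoost coefficients
    # (unit weights under the SAMME.R variant)
    H = np.stack([w * (2 * est.predict(X_val) - 1)
                  for est, w in zip(ada.estimators_, ada.estimator_weights_)])
    y_pm = 2 * y_val - 1
    def fitness(mask):
        if not mask.any():
            return -np.inf
        acc = (np.sign(H[mask].sum(axis=0)) == y_pm).mean()
        return acc - size_penalty * mask.sum()          # reward small ensembles
    masks = rng.random((pop, len(H))) < 0.5
    for _ in range(gens):
        fits = np.array([fitness(m) for m in masks])
        parents = masks[np.argsort(fits)[-pop // 2:]]   # truncation selection
        children = []
        for _ in range(pop - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, len(H))
            child = np.concatenate([a[:cut], b[cut:]])  # one-point crossover
            child ^= rng.random(len(H)) < 0.01          # bit-flip mutation
            children.append(child)
        masks = np.vstack([parents, children])
    return masks[np.argmax([fitness(m) for m in masks])]
```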

15.
In this paper, the problem of finding an appropriate classifier ensemble for named entity recognition is posed as a multiobjective optimization (MOO) problem. Our underlying assumption is that, instead of searching for the single best-fitting feature set for a particular classifier, ensembling several classifiers that are trained using different feature representations could be a more fruitful approach; it is crucial, however, to determine the appropriate subset of classifiers that are most suitable for the ensemble. We use three heterogeneous classifiers, namely maximum entropy, conditional random field, and support vector machine, to build a number of models depending on the various representations of the available features. The proposed MOO-based ensemble technique is evaluated for three resource-constrained languages, namely Bengali, Hindi, and Telugu. Evaluation yields recall, precision, and F-measure values of 92.21, 92.72, and 92.46%, respectively, for Bengali; 97.07, 89.63, and 93.20% for Hindi; and 80.79, 93.18, and 86.54% for Telugu. We also evaluate the technique on the CoNLL-2003 shared task English data sets, yielding recall, precision, and F-measure values of 89.72, 89.84, and 89.78%, respectively. Experimental results show that the classifier ensemble identified by our MOO-based approach outperforms all the individual classifiers, two different conventional baseline ensembles, and the classifier ensemble identified by a single-objective-based approach. In a part of the paper, we also formulate the problem of feature selection in any classifier under the MOO framework and show that our proposed classifier ensemble attains superior performance to it.

16.
Selective ensemble learning trains multiple base classifiers for the same problem and integrates, according to some rule, the outputs of a selected subset of them. Selective ensembles can achieve better results than either a single learner or an ensemble of all learners, and can significantly improve the generalization performance of a learning system. A multi-level selective ensemble learning algorithm, Ada_ens, is proposed. Experimental results show that Ada_ens achieves better learning performance and generalization.

17.
The automatic detection of construction materials in images acquired on a construction site has been regarded as a critical topic. Recently, several data mining techniques have been used to solve the problem of detecting construction materials. These studies applied single classifiers, using color as a feature, to detect construction materials and distinguish them from the background. Recent studies suggest that combining multiple classifiers (into what is called a heterogeneous ensemble classifier) shows better performance than using a single classifier. However, the performance of ensemble classifiers in construction material detection is not fully understood. In this study, we investigated the performance of six single classifiers and potential ensemble classifiers on three data sets: one each for concrete, steel, and wood. A heterogeneous voting-based ensemble classifier was created by selecting base classifiers that are diverse and accurate; their prediction probabilities for each target class were averaged to yield a final decision for that class. In comparison with the single classifiers, the ensemble classifiers performed better overall on the three data sets. This suggests that an ensemble classifier is the better choice for enhancing the detection of construction materials in images acquired on a construction site.
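A minimal sketch of such a heterogeneous soft-voting ensemble with scikit-learn; the base classifiers below are illustrative stand-ins for whichever diverse, accurate models are selected.

```python
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

# soft voting averages the predict_proba outputs across the base classifiers
material_detector = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=100)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
    ],
    voting="soft",
)
# material_detector.fit(X_color_features, y_material)  # e.g. concrete/steel/wood
```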

18.
Click fraud has become one of the most common forms of cybercrime in recent years, and the online advertising industry suffers huge losses from it every year. To detect fraudulent clicks effectively among massive volumes of clicks, features that thoroughly combine the click and temporal attributes of advertisements were constructed, and an ensemble learning framework for click fraud detection, CAT-RFE, is proposed. The framework has three components: base classifiers, recursive feature elimination (RFE), and voting-based ensemble learning. CatBoost (categorical boosting), a gradient boosting model well suited to categorical features, serves as the base classifier; RFE is a greedy feature selection method that picks good feature combinations from multiple candidate sets; and voting combines the results of multiple base classifiers. The framework uses CatBoost and RFE to obtain several strong feature combinations in the feature space, and then integrates the models trained under these feature combinations by voting to produce the ensemble click fraud detection result. Because the framework uses the same base classifier and ensemble method throughout, it avoids both the problem of widely differing classifiers constraining one another and degrading the ensemble result, and RFE's tendency to get stuck in locally optimal feature subsets, giving better detection ability. Performance evaluation and comparison on a real-world click fraud data set show that CAT-RFE outperforms the CatBoost model alone, CatBoost combined with RFE, and other machine learning models, demonstrating its competitiveness. The framework provides a feasible solution for click fraud detection in online advertising.
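A minimal sketch of the CAT-RFE pipeline, assuming the catboost and scikit-learn packages; the subset sizes and iteration counts are illustrative. RFE can wrap CatBoost because the fitted model exposes feature_importances_.

```python
import numpy as np
from catboost import CatBoostClassifier
from sklearn.feature_selection import RFE

def cat_rfe_ensemble(X_train, y_train, X_test, subset_sizes=(10, 15, 20)):
    models, supports = [], []
    for k in subset_sizes:
        # RFE over a CatBoost base classifier yields one feature combination
        rfe = RFE(CatBoostClassifier(iterations=200, verbose=0),
                  n_features_to_select=k).fit(X_train, y_train)
        clf = CatBoostClassifier(iterations=200, verbose=0).fit(
            X_train[:, rfe.support_], y_train)
        models.append(clf)
        supports.append(rfe.support_)
    # voting: average the fraud probabilities from each feature-subset model
    probs = np.mean([m.predict_proba(X_test[:, s])[:, 1]
                     for m, s in zip(models, supports)], axis=0)
    return (probs >= 0.5).astype(int)
```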

19.
This paper presents a novel method for facial expression recognition that combines two different feature sets in an ensemble approach. A pool of base support vector machine classifiers is created using Gabor filters and Local Binary Patterns. A multi-objective genetic algorithm is then used to search for the best ensemble, using as objective functions the minimization of both the error rate and the size of the ensemble. Experimental results on the JAFFE and Cohn-Kanade databases show the efficiency of the proposed strategy in finding powerful ensembles, improving recognition rates by 5% to 10% over conventional approaches that employ single feature sets and single classifiers.
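A minimal sketch that replaces the paper's multi-objective genetic algorithm with random search plus Pareto filtering over (error rate, ensemble size); it illustrates the two objectives, not the authors' search procedure.

```python
import numpy as np

def pareto_ensembles(pred_matrix, y_val, n_candidates=500, rng=0):
    # pred_matrix: (n_classifiers, n_val) predictions from the SVM pool
    rng = np.random.default_rng(rng)
    T = len(pred_matrix)
    cands = []
    for _ in range(n_candidates):
        mask = rng.random(T) < 0.5                 # a random candidate ensemble
        if not mask.any():
            continue
        votes = np.apply_along_axis(
            lambda v: np.bincount(v.astype(int)).argmax(), 0, pred_matrix[mask])
        cands.append((mask, (votes != y_val).mean(), mask.sum()))
    # keep candidates not dominated in (error rate, ensemble size)
    front = [c for c in cands
             if not any(o[1] <= c[1] and o[2] <= c[2] and
                        (o[1] < c[1] or o[2] < c[2]) for o in cands)]
    return front
```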

20.
Smart systems-on-chip (SoC) that integrate multiple sensors are widely used in the Internet of Things. For classification algorithms that fuse data from multiple sensors, a traditional single SVM classifier cannot directly perform small-sample incremental learning on sensor data streams. To address this, an ensemble incremental algorithm based on Bagging-SVM is proposed: training sets are drawn from the incremental data by Bootstrap sampling to construct an ensemble classifier that reflects the new information, and the new ensemble is then combined with the old classifiers to realize incremental ensemble learning. Experimental results show that, compared with a single SVM classifier, the algorithm effectively reduces classification error, improves accuracy, and generalizes well, meeting the online learning needs of today's intelligent sensor systems on small-sample data streams.
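A minimal sketch of the Bagging-SVM incremental scheme, assuming scikit-learn and NumPy; each bootstrap sample is assumed to contain all classes.

```python
import numpy as np
from sklearn.svm import SVC

def update_ensemble(ensemble, X_new, y_new, n_bags=5, rng=0):
    rng = np.random.default_rng(rng)
    for _ in range(n_bags):
        boot = rng.integers(0, len(y_new), size=len(y_new))  # Bootstrap sampling
        ensemble.append(SVC().fit(X_new[boot], y_new[boot]))
    return ensemble                  # old and new classifiers are kept together

def predict(ensemble, X):
    votes = np.stack([clf.predict(X) for clf in ensemble])
    # majority vote across the whole (old + new) ensemble; integer labels assumed
    return np.apply_along_axis(
        lambda v: np.bincount(v.astype(int)).argmax(), 0, votes)
```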
