Similar Literature
10 similar articles found.
1.
Ensemble learning, which builds a strong classifier out of multiple weak classifiers, is one of the important research directions in machine learning. Although many methods for generating diverse base classifiers have been proposed, their robustness still needs to be improved. The shrinking-sample ensemble learning algorithm combines the learning ideas of the two most popular ensemble algorithms, boosting and bagging: by repeatedly removing the highest-confidence samples from the training set, the training set shrinks step by step, so that samples that were previously underrated are trained sufficiently by the subsequent classifiers. This strategy produces a series of progressively smaller training subsets and therefore a series of diverse base classifiers. Like boosting and bagging, the shrinking-sample ensemble method integrates the base classifiers by voting. Under rigorous ten-fold cross-validation, tests on 8 UCI data sets with 7 base classifiers show that the shrinking-sample ensemble learning algorithm outperforms boosting and bagging overall.
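A minimal sketch of the shrinking-sample idea described above, using scikit-learn decision trees as the base learner and simple majority voting; the fraction of high-confidence samples removed per round (drop_frac) is a hypothetical parameter, since the abstract does not specify the exact removal criterion:

```python
import numpy as np
from sklearn.base import clone
from sklearn.tree import DecisionTreeClassifier

def shrinking_sample_ensemble(X, y, n_rounds=5, drop_frac=0.2,
                              base=DecisionTreeClassifier(max_depth=3)):
    """Train base classifiers on progressively smaller training subsets.

    After each round, the drop_frac highest-confidence samples (as judged by
    the classifier just trained) are removed, so later classifiers concentrate
    on the samples earlier ones handled less confidently.
    """
    X_cur, y_cur = np.asarray(X), np.asarray(y)
    ensemble = []
    for _ in range(n_rounds):
        clf = clone(base).fit(X_cur, y_cur)
        ensemble.append(clf)
        conf = clf.predict_proba(X_cur).max(axis=1)          # per-sample confidence
        keep = np.argsort(conf)[: int(len(conf) * (1 - drop_frac))]
        if len(keep) < 2:                                     # stop once too few samples remain
            break
        X_cur, y_cur = X_cur[keep], y_cur[keep]
    return ensemble

def vote(ensemble, X):
    """Combine the base classifiers by simple majority voting
    (assumes non-negative integer class labels, as in most UCI sets)."""
    preds = np.stack([clf.predict(X) for clf in ensemble]).astype(int)
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, preds)
```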

2.
Ant colony optimization (ACO) algorithms have been successfully applied to data classification, where the aim is to discover a list of classification rules. However, because the search in ACO algorithms is essentially random, the lists of classification rules constructed by ACO-based classification algorithms are not fixed and may differ markedly even when trained on the same training set. Those differences are generally ignored, so beneficial information contained in the differing rule lists is not exploited, which may lower the predictive accuracy. To overcome this shortcoming, this paper proposes a novel classification rule discovery algorithm based on ACO, named AntMinermbc, in which a new model of multiple rule sets is presented to produce multiple lists of rules. Multiple base classifiers are built in AntMinermbc, and each base classifier is expected to remedy the weaknesses of the other base classifiers, which improves predictive accuracy by exploiting the useful information from the various base classifiers. A new heuristic function for ACO is also designed in our algorithm, which considers both correlation and coverage in order to avoid deceptively high accuracy. The performance of our algorithm is studied experimentally on 19 publicly available data sets and compared to several state-of-the-art classification approaches. The experimental results show that the predictive accuracy obtained by our algorithm is statistically higher than that of the compared methods.
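The abstract does not give the new heuristic function itself, so the sketch below only illustrates one plausible shape of a correlation-plus-coverage score for a candidate rule term; the actual formula in AntMinermbc may differ:

```python
import numpy as np

def term_heuristic(X, y, feature, value, target_class):
    """Hypothetical ACO heuristic for the candidate rule term (feature == value).

    The paper combines correlation and coverage so that terms do not look
    accurate merely because they cover very few samples; this score simply
    multiplies a coverage term and a class-purity term to show the idea.
    """
    covered = X[:, feature] == value
    if covered.sum() == 0:
        return 0.0
    coverage = covered.mean()                            # fraction of samples the term covers
    correlation = (y[covered] == target_class).mean()    # class purity among covered samples
    return coverage * correlation                        # low coverage and low purity both penalised
```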

3.
朱亮, 徐华, 成金海, 朱深. 《计算机应用》(Journal of Computer Applications), 2022, 42(7): 2022-2029.
To address the low efficiency of the linear combination of base classifiers and the excessive attention paid to hard-to-classify samples in the Adaptive Boosting (AdaBoost) algorithm, two improved algorithms based on margin theory, WPIAda and WPIAda.M, were proposed. First, both WPIAda and WPIAda.M divide the update of sample weights into four cases, increasing the weights of samples whose margin changes from positive to negative in order to suppress negative movement of the margin and to reduce the number of samples whose margin sits at zero. Second, WPIAda.M derives a new method for solving the base-classifier coefficients from the error rate of the base classifier and the distribution of sample weights, thereby improving the efficiency with which base classifiers are combined. On 10 UCI data sets, compared with algorithms such as dfAda, skAda and swaAda, WPIAda and WPIAda.M reduced the test error by an average of 7.46 and 7.64 percentage points respectively, and increased the AUC by 11.65 and 11.92 percentage points respectively. The experimental results show that WPIAda and WPIAda.M can effectively reduce the attention paid to hard-to-classify samples, and that WPIAda.M can integrate base classifiers more efficiently, so both algorithms can further improve classification performance.
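The quantity behind the paper's four weight-update cases is the standard normalized voting margin of boosting; a small helper to compute it, assuming labels in {-1, +1}:

```python
import numpy as np

def boosting_margins(alphas, hypotheses, X, y):
    """Normalized voting margin of an AdaBoost-style ensemble.

    margin_i = y_i * sum_t(alpha_t * h_t(x_i)) / sum_t(alpha_t), with labels
    y_i in {-1, +1}. A positive margin means the weighted vote classifies
    sample i correctly; the paper's weight-update cases are driven by how this
    quantity moves between boosting rounds.
    """
    votes = sum(a * h.predict(X) for a, h in zip(alphas, hypotheses))
    return y * votes / sum(alphas)
```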

4.
Training-set resampling based ensemble design techniques are successfully used to reduce the classification errors of the base classifiers. Boosting is one such technique, where each training set is obtained by drawing samples with replacement from the available training set according to a weighted distribution that is modified for each new classifier to be included in the ensemble. The weighted resampling results in a set of classifiers, each accurate in different parts of the input space, mainly specified by the sample weights. In this study, a dynamic integration of boosting-based ensembles is proposed so as to take into account the heterogeneity of the input sets. An evidence-theoretic framework is developed for this purpose, taking into account the weights of and distances to the neighboring training samples when both training and testing boosting-based ensembles. The effectiveness of the proposed technique is compared to the AdaBoost algorithm using three different base classifiers.
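A minimal sketch of the local-competence estimate that such a dynamic integration relies on, weighting each ensemble member by its distance-weighted accuracy on the query's nearest training neighbours; the evidence-theoretic fusion itself is not reproduced:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def dynamic_weights(ensemble, X_train, y_train, x_query, k=10):
    """Weight each ensemble member by its distance-weighted accuracy on the
    k training samples nearest to the query. Only the local-competence part of
    dynamic integration is shown here.
    """
    nn = NearestNeighbors(n_neighbors=k).fit(X_train)
    dist, idx = nn.kneighbors(x_query.reshape(1, -1))
    dist, idx = dist[0], idx[0]
    sim = 1.0 / (dist + 1e-12)                       # closer neighbours count more
    weights = []
    for clf in ensemble:
        correct = (clf.predict(X_train[idx]) == y_train[idx]).astype(float)
        weights.append(np.dot(sim, correct) / sim.sum())
    w = np.asarray(weights)
    return w / w.sum() if w.sum() > 0 else np.full(len(ensemble), 1.0 / len(ensemble))
```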

5.
Several researchers have shown that substantial improvements can be achieved in difficult pattern recognition problems by combining the outputs of multiple neural networks. In this work, we present and test a pattern classification multi-net system based on both supervised and unsupervised learning. Following the ‘divide-and-conquer’ framework, the input space is partitioned into overlapping subspaces and neural networks are subsequently used to solve the respective classification subtasks. Finally, the outputs of individual classifiers are appropriately combined to obtain the final classification decision. Two clustering methods have been applied for input space partitioning and two schemes have been considered for combining the outputs of the multiple classifiers. Experiments on well-known data sets indicate that the multi-net classification system exhibits promising performance compared with the case of single network training, both in terms of error rates and in terms of training speed (especially if the training of the classifiers is done in parallel).
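A minimal divide-and-conquer sketch in the spirit of the multi-net system, using k-means for partitioning and one small scikit-learn MLP per cluster; the overlapping subspaces and the specific combination schemes of the paper are not reproduced:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

class ClusterAndClassify:
    """Partition the input space with k-means and train one small neural
    network per cluster. Each test point is simply routed to the network of
    its nearest cluster (a selection-style combiner)."""
    def __init__(self, n_clusters=3):
        self.km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
        self.nets = {}

    def fit(self, X, y):
        labels = self.km.fit_predict(X)
        for c in np.unique(labels):
            Xc, yc = X[labels == c], y[labels == c]
            if len(np.unique(yc)) == 1:
                self.nets[c] = yc[0]             # degenerate cluster: remember its single class
            else:
                self.nets[c] = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                                             random_state=0).fit(Xc, yc)
        return self

    def predict(self, X):
        out = []
        for c, x in zip(self.km.predict(X), X):
            net = self.nets[c]
            out.append(net.predict(x.reshape(1, -1))[0] if hasattr(net, "predict") else net)
        return np.array(out)
```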

6.
It has been widely accepted that classification accuracy can be improved by combining the outputs of multiple classifiers. However, how to combine multiple classifiers with various (potentially conflicting) decisions is still an open problem. A rich collection of classifier combination procedures, many of which are heuristic in nature, has been developed for this purpose. In this brief, we describe a dynamic approach to combining classifiers that have expertise in different regions of the input space. To this end, we use local classifier accuracy estimates to weight classifier outputs. Specifically, we estimate local recognition accuracies of classifiers near a query sample by utilizing its nearest neighbors, and then use these estimates to find the best weights of classifiers to label the query. The problem is formulated as a convex quadratic optimization problem, which returns optimal nonnegative classifier weights with respect to the chosen objective function, and these weights ensure that the locally most accurate classifiers are weighted more heavily when labeling the query sample. Experimental results on several data sets indicate that the proposed weighting scheme outperforms other popular classifier combination schemes, particularly on problems with complex decision boundaries. Hence, the results indicate that combination techniques based on local classification accuracy are well suited for decision making when the classifiers are trained by focusing on different regions of the input space.
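The abstract does not state the objective of the quadratic program, so the sketch below uses a hypothetical one (squared error of the weighted votes on the query's nearest neighbours) purely to show the shape of a nonnegative, sum-to-one weight fit:

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.neighbors import NearestNeighbors

def local_qp_weights(ensemble, X_train, y_train, x_query, k=10):
    """Fit nonnegative, sum-to-one classifier weights on the query's neighbourhood.

    Hypothetical objective: squared error between the weighted ensemble votes
    and the true labels of the k nearest neighbours (labels assumed in {-1, +1});
    the actual quadratic program in the paper may use a different objective.
    """
    idx = NearestNeighbors(n_neighbors=k).fit(X_train).kneighbors(
        x_query.reshape(1, -1), return_distance=False)[0]
    H = np.column_stack([clf.predict(X_train[idx]) for clf in ensemble])  # k x n_classifiers votes
    y_loc = y_train[idx]
    n = H.shape[1]
    objective = lambda w: np.sum((H @ w - y_loc) ** 2)     # convex quadratic in w
    res = minimize(objective, x0=np.full(n, 1.0 / n), method="SLSQP",
                   bounds=[(0.0, None)] * n,
                   constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
    return res.x
```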

7.
Real-life datasets are often imbalanced: there are significantly more training samples available for some classes than for others, and consequently the conventional aim of maximizing overall classification accuracy is not appropriate when dealing with such problems. Various approaches have been introduced in the literature to deal with imbalanced datasets, typically based on oversampling, undersampling or cost-sensitive classification. In this paper, we introduce an effective ensemble of cost-sensitive decision trees for imbalanced classification. Base classifiers are constructed according to a given cost matrix, but are trained on random feature subspaces to ensure sufficient diversity of the ensemble members. We employ an evolutionary algorithm for simultaneous classifier selection and assignment of committee member weights for the fusion process. Our proposed algorithm is evaluated on a variety of benchmark datasets and is confirmed to lead to improved recognition of the minority class, to be capable of outperforming other state-of-the-art algorithms, and hence to represent a useful and effective approach for dealing with imbalanced datasets.
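A hedged sketch of the cost-sensitive random-subspace construction; sklearn's class_weight stands in for the cost matrix, and the evolutionary selection and weighting step is omitted, so the members would simply be combined by voting:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def cost_sensitive_subspace_ensemble(X, y, n_trees=15, subspace_frac=0.5,
                                     class_weight=None, seed=0):
    """Build cost-sensitive trees on random feature subspaces (a sketch).

    The cost matrix is approximated by per-class weights, e.g.
    class_weight={0: 1.0, 1: 5.0} to make minority-class errors costlier.
    """
    rng = np.random.RandomState(seed)
    n_features = X.shape[1]
    k = max(1, int(subspace_frac * n_features))
    ensemble = []
    for _ in range(n_trees):
        feats = rng.choice(n_features, size=k, replace=False)   # random feature subspace
        tree = DecisionTreeClassifier(class_weight=class_weight,
                                      random_state=rng).fit(X[:, feats], y)
        ensemble.append((feats, tree))
    return ensemble
```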

8.
This paper proposes a method for combining multiple tree classifiers based on both a classifier ensemble scheme (bagging) and dynamic classifier selection (DCS). The proposed method consists of the following procedures: (1) building individual tree classifiers on bootstrap samples; (2) calculating the distance between every pair of trees; (3) clustering the trees using single-linkage clustering; (4) selecting two clusters for each local region according to accuracy and error diversity; and (5) voting over the outputs of the tree classifiers selected from the two clusters. Empirical evaluation using publicly available data sets confirms the superiority of our proposed approach over other classifier combination methods.
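A sketch of steps (2) and (3), assuming the inter-tree distance is the disagreement rate of the trees' predictions on a validation set (the paper's exact distance measure is not specified here):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def cluster_trees_by_disagreement(trees, X_val, n_clusters=4):
    """Group tree classifiers with single-linkage clustering of pairwise distances.

    The distance used here is the disagreement rate of two trees' predictions
    on a validation set.
    """
    preds = np.stack([t.predict(X_val) for t in trees])            # n_trees x n_val predictions
    n = len(trees)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = np.mean(preds[i] != preds[j])      # fraction of disagreements
    Z = linkage(squareform(D), method="single")                    # single-linkage hierarchy
    return fcluster(Z, t=n_clusters, criterion="maxclust")         # cluster label for each tree
```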

9.
Although both online learning and kernel learning have been studied extensively in machine learning, there is limited effort in addressing the intersecting research problems of these two important topics. As an attempt to fill the gap, we address a new research problem, termed Online Multiple Kernel Classification (OMKC), which learns a kernel-based prediction function by selecting a subset of predefined kernel functions in an online learning fashion. OMKC is in general more challenging than typical online learning because both the kernel classifiers and the subset of selected kernels are unknown, and more importantly the solutions to the kernel classifiers and their combination weights are correlated. The proposed algorithms are based on the fusion of two online learning algorithms, i.e., the Perceptron algorithm that learns a classifier for a given kernel, and the Hedge algorithm that combines classifiers by linear weights. We develop stochastic selection strategies that randomly select a subset of kernels for combination and model updating, thus improving the learning efficiency. Our empirical study with 15 data sets shows promising performance of the proposed algorithms for OMKC in both learning efficiency and prediction accuracy.
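A compact sketch of the deterministic core of such an approach, one kernel perceptron per predefined kernel plus Hedge-style multiplicative weights; the stochastic kernel-selection strategies from the paper are omitted, and labels are assumed to be in {-1, +1}:

```python
import numpy as np

def omkc_sketch(X, y, kernels, eta=0.5):
    """One kernel perceptron is run online per predefined kernel; a Hedge-style
    multiplicative weight per kernel is decayed whenever that kernel's
    perceptron errs. kernels is a list of callables, e.g. a linear kernel
    lambda a, b: float(np.dot(a, b)).
    """
    m, n = len(kernels), len(X)
    alpha = np.zeros((m, n))          # perceptron support coefficients, one row per kernel
    w = np.ones(m)                    # Hedge weight of each kernel classifier
    mistakes = 0
    for t in range(n):
        f = np.zeros(m)               # per-kernel perceptron score for sample t
        for k in range(m):
            if t > 0:
                k_vals = np.array([kernels[k](X[i], X[t]) for i in range(t)])
                f[k] = np.sum(alpha[k, :t] * y[:t] * k_vals)
        score = np.dot(w / w.sum(), np.sign(f))   # weighted vote of the kernel classifiers
        y_hat = 1.0 if score >= 0 else -1.0
        if y_hat != y[t]:
            mistakes += 1
        for k in range(m):
            if np.sign(f[k]) != y[t]:             # this kernel's perceptron was wrong
                alpha[k, t] = 1.0                 # perceptron update: remember sample t
                w[k] *= (1.0 - eta)               # Hedge update: down-weight the erring kernel
    return alpha, w, mistakes
```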

10.
This paper presents the cluster-based ensemble classifier, an approach to generating an ensemble of classifiers using multiple clusters within the classified data. Clustering is incorporated to partition the data set into multiple clusters of highly correlated data that are difficult to separate otherwise, and different base classifiers are used to learn the class boundaries within the clusters. As the different base classifiers work on different difficult-to-classify subsets of the data, the learning of the base classifiers is more focused and accurate. A selection approach, rather than fusion, produces the final verdict on patterns of unknown class. The impact of clustering on the learning parameters and accuracy of a number of learning algorithms, including neural network, support vector machine, decision tree and k-NN classifiers, is investigated. A number of benchmark data sets from the UCI machine learning repository were used to evaluate the cluster-based ensemble classifier, and the experimental results demonstrate its superiority over bagging and boosting.
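The routing structure mirrors the divide-and-conquer sketch shown earlier; this variant just illustrates assigning heterogeneous base learners (tree, SVM, k-NN) to the clusters and making the final decision by selection:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

def fit_cluster_ensemble(X, y, n_clusters=3):
    """Train a different type of base learner on each cluster of the data,
    cycling through a few heterogeneous learners. Assumes every cluster is
    large enough and contains more than one class.
    """
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    base_types = [DecisionTreeClassifier, SVC, KNeighborsClassifier]
    members = {}
    for c in range(n_clusters):
        mask = km.labels_ == c
        members[c] = base_types[c % len(base_types)]().fit(X[mask], y[mask])
    return km, members

def predict_cluster_ensemble(km, members, X):
    """Selection-based prediction: each sample is labelled by its cluster's learner."""
    return np.array([members[c].predict(x.reshape(1, -1))[0]
                     for c, x in zip(km.predict(X), X)])
```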

