Similar Documents
20 similar documents found.
1.
The robustness of combining diverse classifiers by majority voting has recently been demonstrated in the pattern recognition literature. Furthermore, negatively correlated classifiers turn out to offer a further improvement of majority voting performance, even compared with the idealised model of independent classifiers. However, negatively correlated classifiers represent a very unlikely situation in real-world classification problems, and their benefits usually remain out of reach. Nevertheless, it is theoretically possible to obtain a 0% majority voting error using a finite number of classifiers whose individual error rates are below 50%. We attempt to show that structuring classifiers into relevant multistage organisations can widen this boundary, as well as the limits of majority voting error, even further. Introducing discrete error distributions for the analysis, we show how majority voting errors and their limits depend upon the parameters of a multiple classifier system with hardened binary outputs (correct/incorrect). Moreover, we investigate the sensitivity of boundary distributions of classifier outputs to small discrepancies, modelled by random changes of votes, and propose new, more stable patterns of boundary distributions. Finally, we show how organising classifiers into different structures can be used to widen the limits of majority voting errors, and how this phenomenon can be exploited effectively.
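A minimal sketch (not from the paper) of the idealised reference case mentioned above: the majority voting error of independent classifiers with known individual error rates, computed by enumerating hardened correct/incorrect vote patterns. The function name and the example error rates are illustrative assumptions.

```python
from math import prod

def majority_voting_error(error_rates):
    """Error of majority voting over independent binary-output classifiers.

    Each classifier is reduced to a hardened output (correct/incorrect) with
    the given individual error rate; the ensemble errs when more than half
    of the classifiers are simultaneously incorrect (odd ensemble size assumed).
    """
    n = len(error_rates)
    assert n % 2 == 1, "use an odd number of voters to avoid ties"
    total = 0.0
    # Enumerate every correct/incorrect outcome pattern of the n classifiers.
    for pattern in range(2 ** n):
        wrong = [(pattern >> i) & 1 for i in range(n)]
        if sum(wrong) > n // 2:  # majority is wrong
            total += prod(e if w else (1.0 - e)
                          for e, w in zip(error_rates, wrong))
    return total

# Three independent classifiers, each 30% wrong -> ensemble error of about 21.6%.
print(majority_voting_error([0.3, 0.3, 0.3]))
```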

2.
We present attribute bagging (AB), a technique for improving the accuracy and stability of classifier ensembles induced using random subsets of features. AB is a wrapper method that can be used with any learning algorithm. It establishes an appropriate attribute subset size and then randomly selects subsets of features, creating projections of the training set on which the ensemble classifiers are built. The induced classifiers are then used for voting. This article compares the performance of our AB method with bagging and other algorithms on a hand-pose recognition dataset. It is shown that AB gives consistently better results than bagging, both in accuracy and stability. The performance of ensemble voting in bagging and the AB method as a function of the attribute subset size and the number of voters for both weighted and unweighted voting is tested and discussed. We also demonstrate that ranking the attribute subsets by their classification accuracy and voting using only the best subsets further improves the resulting performance of the ensemble.
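A hedged sketch of the attribute bagging idea described above, assuming scikit-learn, decision trees as base learners and a fixed attribute-subset size; the paper's procedure for choosing the subset size and its ranking of subsets by accuracy are not reproduced.

```python
import numpy as np
from collections import Counter
from sklearn.tree import DecisionTreeClassifier

def attribute_bagging_fit(X, y, n_classifiers=15, subset_size=5, seed=0):
    """Train one classifier per random feature subset (attribute bagging)."""
    rng = np.random.default_rng(seed)
    ensemble = []
    for _ in range(n_classifiers):
        # Project the training set onto a random subset of attributes.
        features = rng.choice(X.shape[1], size=subset_size, replace=False)
        clf = DecisionTreeClassifier().fit(X[:, features], y)
        ensemble.append((features, clf))
    return ensemble

def attribute_bagging_predict(ensemble, X):
    """Unweighted majority vote over the projected classifiers."""
    predictions = []
    for row in X:
        votes = [clf.predict(row[features].reshape(1, -1))[0]
                 for features, clf in ensemble]
        predictions.append(Counter(votes).most_common(1)[0][0])
    return np.array(predictions)
```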

3.
Multiple classifier systems (MCS) are attracting increasing interest in the field of pattern recognition and machine learning. Recently, MCS are also being introduced in the remote sensing field, where the importance of classifier diversity for image classification problems has not been examined. In this article, Satellite Pour l'Observation de la Terre (SPOT) IV panchromatic and multispectral satellite images are classified into six land cover classes using five base classifiers: contextual classifier, k-nearest neighbour classifier, Mahalanobis classifier, maximum likelihood classifier and minimum distance classifier. The five base classifiers are trained with the same feature sets throughout the experiments and a posteriori probability, derived from the confusion matrix of these base classifiers, is applied to five Bayesian decision rules (product rule, sum rule, maximum rule, minimum rule and median rule) for constructing different combinations of classifier ensembles. The performance of these classifier ensembles is evaluated for overall accuracy and kappa statistics. Three statistical tests, McNemar's test, Cochran's Q test and Looney's F-test, are used to examine the diversity of the classification results of the base classifiers compared to the results of the classifier ensembles. The experimental comparison reveals that (a) significant diversity amongst the base classifiers cannot enhance the performance of classifier ensembles; (b) accuracy improvement of classifier ensembles can only be found by using base classifiers with similar and low accuracy; (c) increasing the number of base classifiers cannot improve the overall accuracy of the MCS and (d) none of the Bayesian decision rules outperforms the others.
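The five Bayesian decision rules named above reduce to simple element-wise operations on the classifiers' posterior probability estimates. A minimal sketch, assuming each classifier supplies an (n_samples, n_classes) array of posteriors; it is not the article's remote-sensing pipeline.

```python
import numpy as np

def combine_posteriors(posterior_stack, rule="sum"):
    """Combine classifier posteriors with a fixed Bayesian decision rule.

    posterior_stack: array of shape (n_classifiers, n_samples, n_classes),
    each slice holding one classifier's class-posterior estimates.
    Returns the index of the winning class for every sample.
    """
    rules = {
        "product": lambda p: p.prod(axis=0),
        "sum":     lambda p: p.sum(axis=0),
        "maximum": lambda p: p.max(axis=0),
        "minimum": lambda p: p.min(axis=0),
        "median":  lambda p: np.median(p, axis=0),
    }
    combined = rules[rule](posterior_stack)  # shape (n_samples, n_classes)
    return combined.argmax(axis=1)
```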

4.
In this article the effectiveness of some recently developed genetic algorithm-based pattern classifiers was investigated in the domain of satellite imagery, which usually has complex and overlapping class boundaries. Landsat data, a SPOT image and an IRS image are considered as input. The superiority of these classifiers over the k-NN rule, Bayes' maximum likelihood classifier and the multilayer perceptron (MLP) for partitioning different landcover types is established. Results based on producer's accuracy (percentage recognition score), user's accuracy and kappa values are provided. Incorporation of the concepts of variable-length chromosomes and chromosome discrimination led to superior performance in terms of the automatic evolution of the number of hyperplanes for modelling the class boundaries, and the convergence time. This non-parametric classifier requires very little a priori information, unlike the k-NN rule and MLP (where the performance depends heavily on the value of k and the architecture, respectively), and Bayes' maximum likelihood classifier (where assumptions regarding the class distribution functions need to be made).

5.
In handwritten pattern recognition, the multiple classifier system has been shown to be useful for improving recognition rates. One of the most important tasks in optimizing a multiple classifier system is to select a group of adequate classifiers, known as an Ensemble of Classifiers (EoC), from a pool of classifiers. Static selection schemes select an EoC for all test patterns, and dynamic selection schemes select different classifiers for different test patterns. Nevertheless, it has been shown that traditional dynamic selection performs no better than static selection. We propose four new dynamic selection schemes which explore the properties of the oracle concept. Our results suggest that the proposed schemes, using the majority voting rule for combining classifiers, perform better than the static selection method.

6.
Voting ensembles for spoken affect classification
Affect or emotion classification from speech has much to benefit from ensemble classification methods. In this paper we apply a simple voting mechanism to an ensemble of classifiers and attain a modest performance increase compared to the individual classifiers. A natural emotional speech database was compiled from 11 speakers. Listener-judges were used to validate the emotional content of the speech. Thirty-eight prosody-based features correlating characteristics of speech with emotional states were extracted from the data. A classifier ensemble was designed using a multi-layer perceptron, support vector machine, K* instance-based learner, K-nearest neighbour, and random forest of decision trees. A simple voting scheme determined the most popular prediction. The accuracy of the ensemble is compared with the accuracies of the individual classifiers.
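A minimal scikit-learn sketch of the kind of hard-voting ensemble the abstract describes; the K* instance-based learner has no scikit-learn counterpart and is omitted, and all hyperparameters shown are illustrative assumptions.

```python
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# One vote per base learner; the most popular predicted label wins.
ensemble = VotingClassifier(
    estimators=[
        ("mlp", MLPClassifier(max_iter=1000)),
        ("svm", SVC()),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
        ("rf", RandomForestClassifier(n_estimators=100)),
    ],
    voting="hard",
)
# ensemble.fit(X_train, y_train); ensemble.predict(X_test)
```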

7.
Classifier combination methods have proved to be an effective tool for increasing the performance of classification techniques in pattern recognition applications. Despite a significant number of publications describing successful classifier combination implementations, the theoretical basis is still not mature enough and the achieved improvements are inconsistent. In this paper, we propose a novel statistical validation technique, a correlation-based classifier combination technique, for combining classifiers in any pattern recognition problem. This validation has a significant influence on the performance of combinations, and its utilization is necessary for a complete theoretical understanding of combination algorithms. The analysis presented is statistical in nature but promises to lead to a class of algorithms for rank-based decision combination. The theoretical and practical potential of the approach is illustrated by applying it to two standard pattern recognition datasets, namely the handwritten digit recognition and letter image recognition datasets from the UCI Machine Learning Repository ( http://www.ics.uci.edu/_mlearn ). An empirical evaluation using eight well-known distinct classifiers confirms the validity of our approach compared with several other multiple classifier combination algorithms. Finally, we also suggest a methodology for determining the best mix of individual classifiers.

8.
Cardiotocography (CTG) represents the fetus's health inside the womb during labor. However, assessment of its readings can be a highly subjective process depending on the expertise of the obstetrician. Digital signals from fetal monitors acquire parameters (i.e., fetal heart rate, contractions, acceleration). Objective: This paper aims to classify CTG readings containing imbalanced healthy, suspected, and pathological fetus readings. Method: We perform two sets of experiments. First, we employ five classifiers: Random Forest (RF), Adaptive Boosting (AdaBoost), Categorical Boosting (CatBoost), Extreme Gradient Boosting (XGBoost), and Light Gradient Boosting Machine (LGBM), without over-sampling, to classify CTG readings into three categories: healthy, suspected, and pathological. Second, we employ an ensemble of the above classifiers with an over-sampling method. We use a random over-sampling technique to balance the CTG records used to train the ensemble models. We use 3602 CTG readings to train the ensemble classifiers and 1201 records to evaluate them. The outcomes of these classifiers are then fed into a soft voting classifier to obtain the most accurate results. Results: Each classifier is evaluated on accuracy, precision, recall, F1-score, and Area Under the Receiver Operating Curve (AUROC) values. Results reveal that the XGBoost, LGBM, and CatBoost classifiers yielded 99% accuracy. Conclusion: Using ensemble classifiers over a balanced CTG dataset improves the detection accuracy compared to previous studies and to our first experiment. A soft voting classifier then eliminates the weakness of any one individual classifier to yield superior performance of the overall model.
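A hedged sketch of the second experiment's pattern (random over-sampling followed by a soft-voting ensemble), using scikit-learn estimators in place of the boosting libraries named above; the resampling helper and all parameters are assumptions, not the authors' code.

```python
import numpy as np
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)

def random_oversample(X, y, seed=0):
    """Duplicate minority-class rows until every class matches the largest one."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    X_parts, y_parts = [X], [y]
    for cls, count in zip(classes, counts):
        idx = np.flatnonzero(y == cls)
        extra = rng.choice(idx, size=target - count, replace=True)
        X_parts.append(X[extra])
        y_parts.append(y[extra])
    return np.concatenate(X_parts), np.concatenate(y_parts)

# Soft voting averages the predicted class probabilities of the members.
ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=200)),
                ("gb", GradientBoostingClassifier())],
    voting="soft",
)
# X_bal, y_bal = random_oversample(X_train, y_train); ensemble.fit(X_bal, y_bal)
```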

9.
10.
Predicting future stock index price movement has always been a fascinating research area, both for investors who wish to profit by trading stocks and for researchers who attempt to expose the buried information in complex stock market time series data. This prediction problem can be addressed as a binary classification problem with two class labels, one for the increasing movement and the other for the decreasing movement. In the literature, a wide range of classifiers has been tested for this application. As the performance of an individual classifier varies across diverse datasets and performance measures, it is impractical to declare a specific classifier the best one. Hence, designing an efficient classifier ensemble instead of an individual classifier is attracting increasing attention from many researchers. Selecting base classifiers and deciding their relative importance in the ensemble with respect to a variety of performance criteria can itself be considered a Multi Criteria Decision Making (MCDM) problem. In this paper, an integrated TOPSIS Crow Search based weighted voting classifier ensemble is proposed for stock index price movement prediction. The Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), one of the popular MCDM techniques, is used for ranking and selecting a set of base classifiers for the ensemble, whereas the weights of the classifiers used in the ensemble are tuned by the Crow Search method. The proposed ensemble model is validated for prediction of stock index prices over the historical prices of the BSE SENSEX, S&P500 and NIFTY 50 stock indices. The model has shown better performance compared to individual classifiers and other ensemble models such as majority voting, weighted voting, and differential evolution and particle swarm optimization based classifier ensembles.
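A minimal sketch of the TOPSIS ranking step described above, applied to a classifier-by-criterion performance matrix; the criterion weights and the assumption that all criteria are benefit-type are illustrative, and the Crow Search weight tuning is not reproduced.

```python
import numpy as np

def topsis_rank(performance, weights):
    """Rank alternatives (rows) by closeness to the ideal solution.

    performance: (n_alternatives, n_criteria) matrix, larger = better.
    weights: criterion weights summing to 1.
    """
    norm = performance / np.linalg.norm(performance, axis=0)  # vector normalisation
    weighted = norm * weights
    ideal_best = weighted.max(axis=0)
    ideal_worst = weighted.min(axis=0)
    d_best = np.linalg.norm(weighted - ideal_best, axis=1)
    d_worst = np.linalg.norm(weighted - ideal_worst, axis=1)
    closeness = d_worst / (d_best + d_worst)
    return np.argsort(-closeness)  # best alternative first

# Example: four base classifiers scored on accuracy, F1 and AUC (illustrative numbers).
scores = np.array([[0.81, 0.78, 0.85],
                   [0.79, 0.80, 0.83],
                   [0.84, 0.76, 0.86],
                   [0.77, 0.74, 0.80]])
print(topsis_rank(scores, np.array([0.4, 0.3, 0.3])))
```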

11.
Based on the continuous space model of emotion, an improved ranked-voting algorithm is proposed to fuse multiple emotion classifiers, achieving good emotion recognition results. First, three classifiers are designed on the basis of hidden Markov models (HMM) and artificial neural networks (ANN); then the improved ranked-voting algorithm is used to fuse the three classifiers. Experiments on a Mandarin emotional speech corpus and a German emotional speech corpus show that, compared with several traditional fusion algorithms, the improved ranked-voting method achieves better fusion results, and its recognition performance is clearly superior to that of any single classifier. The algorithm is not only simple but also highly portable, and can be used to fuse any number of other emotion classifiers.
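A hedged sketch of a plain Borda-count style ranked-voting fusion (not the paper's improved variant): each classifier ranks the candidate emotion classes by its confidence scores, the ranks are converted into points, and the class with the highest total wins. The scoring scheme is an assumption.

```python
import numpy as np

def borda_fuse(score_matrices):
    """Fuse classifiers by ranked voting (Borda count).

    score_matrices: list of (n_samples, n_classes) confidence arrays,
    one per classifier.  Within each classifier, the top-ranked class of a
    sample receives n_classes - 1 points, the next n_classes - 2, and so on.
    """
    n_samples, n_classes = score_matrices[0].shape
    points = np.zeros((n_samples, n_classes))
    for scores in score_matrices:
        # argsort of argsort turns raw scores into 0..n_classes-1 ranks (higher = better).
        ranks = scores.argsort(axis=1).argsort(axis=1)
        points += ranks
    return points.argmax(axis=1)
```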

12.
A number of earlier studies that have attempted a theoretical analysis of majority voting assume independence of the classifiers. We formulate the majority voting problem as an optimization problem with linear constraints; no assumptions on the independence of the classifiers are made. For a binary classification problem, given the accuracies of the classifiers in the team, the theoretical upper and lower bounds for the performance obtained by combining them through majority voting are shown to be solutions of the corresponding optimization problem. The objective function of the optimization problem is nonlinear in the case of an even number of classifiers when rejection is allowed; in the other cases the objective function is linear and hence the problem is a linear program (LP). Using this framework we provide some insights and investigate the relationship between two candidate classifier diversity measures and majority voting performance.
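The LP formulation described above can be sketched directly for the binary, no-rejection case: the decision variables are the joint probabilities of all correct/incorrect patterns, constrained only to reproduce each classifier's individual accuracy. A minimal sketch assuming SciPy; the paper's handling of rejection and even-sized teams is not covered.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def majority_vote_bounds(accuracies):
    """Tight bounds on majority-vote accuracy without independence assumptions."""
    n = len(accuracies)
    patterns = list(itertools.product([0, 1], repeat=n))  # 1 = classifier correct
    majority = np.array([float(sum(p) > n / 2) for p in patterns])
    A_eq = [np.ones(len(patterns))]                        # probabilities sum to 1
    b_eq = [1.0]
    for i, acc in enumerate(accuracies):                   # fix each marginal accuracy
        A_eq.append(np.array([p[i] for p in patterns], dtype=float))
        b_eq.append(acc)
    lo = linprog(majority, A_eq=np.array(A_eq), b_eq=b_eq, bounds=(0, 1)).fun
    hi = -linprog(-majority, A_eq=np.array(A_eq), b_eq=b_eq, bounds=(0, 1)).fun
    return lo, hi

# Three classifiers, each 70% accurate: bounds are roughly (0.55, 1.0).
print(majority_vote_bounds([0.7, 0.7, 0.7]))
```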

13.
Multiple classifier fusion can effectively integrate the strengths of several classification algorithms, exploiting their complementarity to improve the robustness and accuracy of intelligent diagnosis models. However, when a multiple classifier fusion decision system is built with majority voting, the number of member classifiers must exceed the number of equipment states to be recognised; otherwise fusion becomes impossible. To address this problem, a binary-tree based multiple classifier fusion algorithm is proposed: a binary tree converts the multi-class classification problem into several binary classification problems, so that the number of member classifiers at each node need only be greater than two, effectively avoiding the problem of an insufficient number of member classifiers. Experimental results show that, compared with single-classifier diagnosis methods, the proposed method can effectively realise intelligent fault diagnosis of rolling bearings, is insensitive to the initial values of the neural networks, and achieves a high and stable recognition rate.
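A hedged sketch of the binary-tree decomposition idea: the class set is split recursively, a small voting ensemble of binary classifiers is trained at every node, and prediction walks the tree; the splitting rule and base learners are illustrative assumptions, not the paper's design.

```python
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

def build_tree(X, y, classes):
    """Recursively split the class set in half and train a binary node ensemble."""
    if len(classes) == 1:
        return classes[0]                                   # leaf: a single class label
    left, right = classes[: len(classes) // 2], classes[len(classes) // 2 :]
    mask = np.isin(y, classes)
    node_clf = VotingClassifier(                            # three member classifiers per node
        estimators=[("lr", LogisticRegression(max_iter=1000)),
                    ("svm", SVC()),
                    ("dt", DecisionTreeClassifier())],
        voting="hard",
    ).fit(X[mask], np.isin(y[mask], left).astype(int))      # 1 = left branch
    return (node_clf, build_tree(X, y, left), build_tree(X, y, right))

def predict_one(node, x):
    """Walk the tree until a leaf class label is reached."""
    while isinstance(node, tuple):
        clf, left, right = node
        node = left if clf.predict(x.reshape(1, -1))[0] == 1 else right
    return node

# tree = build_tree(X_train, y_train, sorted(set(y_train)))
```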

14.
Remote sensing image classification is a common application of remote sensing images. In order to improve classification performance, multiple classifier combinations are used to classify Landsat-8 Operational Land Imager (Landsat-8 OLI) images. Several techniques and classifier combination algorithms are investigated. A classifier ensemble consisting of five member classifiers is constructed and the results of every member classifier are evaluated. A voting strategy is used to combine the classification results of the member classifiers. The results show that the classifiers have different performances, and that the multiple classifier combination outperforms any single classifier, achieving higher overall classification accuracy. The experiment shows that the multiple classifier combinations using producer's accuracy as the voting weight (MCCmod2 and MCCmod3) achieve higher classification accuracy than the algorithm using overall accuracy as the voting weight (MCCmod1), and that combinations using different voting weights affect the classification result differently across land-cover types. The multiple classifier combination algorithm presented in this article, using voting weights based on the accuracy of the member classifiers, may have stability problems, which need to be addressed in future studies.
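A minimal sketch of weighted voting with producer's accuracy (per-class recall on a validation set) as the voting weight, in the spirit of the MCCmod2/MCCmod3 variants mentioned above; the article's exact weighting scheme is not reproduced and the helper names are assumptions.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def producers_accuracy(y_true, y_pred, n_classes):
    """Producer's accuracy (per-class recall) from a validation confusion matrix."""
    cm = confusion_matrix(y_true, y_pred, labels=list(range(n_classes)))
    return np.diag(cm) / np.maximum(cm.sum(axis=1), 1)

def weighted_vote(member_preds, member_weights, n_classes):
    """Each member adds, to the class it predicts, its producer's accuracy for that class.

    member_preds: list of integer prediction arrays (labels 0..n_classes-1).
    member_weights: list of per-class weight vectors, one per member.
    """
    n_samples = member_preds[0].shape[0]
    scores = np.zeros((n_samples, n_classes))
    for preds, weights in zip(member_preds, member_weights):
        scores[np.arange(n_samples), preds] += weights[preds]
    return scores.argmax(axis=1)

# member_weights[k] = producers_accuracy(y_val, clf_k.predict(X_val), n_classes)
```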

15.
Objective: With the development of 3D scanning and virtual reality technology, 3D recognition of real objects has become one of the hot research topics. To address the long training times and unsatisfactory recognition performance of existing deep-learning based methods, a 3D object recognition method combining a perceptron residual network with an extreme learning machine (ELM) is proposed. Method: Based on the ELM framework, a multilayer-perceptron residual network is used to learn multi-view projection features of 3D objects, and the extracted features together with the known labels are used to train an ELM classification layer, a K-nearest-neighbour (KNN) classification layer and a support vector machine (SVM) classification layer for 3D object recognition. The network replaces conventional convolutional layers with convolutional layers augmented by multilayer perceptrons. The convolutional network is composed of improved residual units, each containing several parallel residual channels with a constant number of convolution kernels, which fit residual functions of different mathematical forms. Half of the convolution kernel parameters and perceptron parameters are generated randomly from a Gaussian distribution, and the rest are obtained by training and optimisation. Results: The proposed method achieves 94.18% accuracy on the Princeton 3D model dataset and 97.46% accuracy on the 2D NORB dataset, the best results reported on both international benchmark datasets at the time. Meanwhile, the ELM framework reduces the training time of the proposed algorithm by three orders of magnitude compared with deep-learning based methods. Conclusion: A method for recognising 3D objects from multi-view images is presented. Experiments show that it achieves higher recognition rates and stronger robustness to interference than existing ELM methods and recent deep-learning methods, with few tuning parameters and fast convergence.
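A minimal sketch of the basic extreme learning machine referred to above (random hidden weights, output weights solved in closed form by least squares); it does not include the paper's perceptron residual network or multi-view feature extraction, and all sizes are illustrative.

```python
import numpy as np

class ELMClassifier:
    """Single-hidden-layer ELM: random projection + least-squares output weights."""

    def __init__(self, n_hidden=500, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        # y is assumed to hold integer class labels 0..n_classes-1.
        n_classes = int(y.max()) + 1
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))  # random, never trained
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)                             # hidden activations
        T = np.eye(n_classes)[y]                                     # one-hot targets
        self.beta = np.linalg.pinv(H) @ T                            # closed-form solution
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return (H @ self.beta).argmax(axis=1)
```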

16.
The paper presents a novel machine learning algorithm used for training a compound classifier system that consists of a set of area classifiers. Area classifiers recognize objects derived from the respective competence area. Splitting feature space into areas and selecting area classifiers are two key processes of the algorithm; both take place simultaneously in the course of an optimization process aimed at maximizing the system performance. An evolutionary algorithm is used to find the optimal solution. A number of experiments have been carried out to evaluate system performance. The results prove that the proposed method outperforms each elementary classifier as well as simple voting.

17.
In this paper we investigate the combination of four machine learning methods for text categorization using Dempster's rule of combination. These methods include the Support Vector Machine (SVM), k-nearest neighbour (kNN), a kNN model-based approach (kNNM), and Rocchio. We first present a general representation of the outputs of different classifiers, in particular modelling them as pieces of evidence by using a novel evidence structure called the focal element triplet. Furthermore, we investigate an effective method for combining pieces of evidence derived from classifiers generated by 10-fold cross-validation. Finally, we evaluate our methods on the 20-newsgroup and Reuters-21578 benchmark data sets and perform a comparative analysis with majority voting for combining multiple classifiers, along with previous results. Our experimental results show that the best combined classifier can improve the performance of the individual classifiers, and that Dempster's rule of combination outperforms majority voting in combining multiple classifiers.
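A minimal sketch of Dempster's rule of combination for two basic probability assignments over a small frame of discernment; the focal element triplet structure and cross-validation scheme described above are not reproduced, and the example masses are illustrative.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dict: frozenset -> mass) by Dempster's rule."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb                     # mass assigned to the empty set
    return {s: m / (1.0 - conflict) for s, m in combined.items()}

# Two classifiers expressing belief over the categories {sport, politics}:
m1 = {frozenset({"sport"}): 0.7, frozenset({"sport", "politics"}): 0.3}
m2 = {frozenset({"sport"}): 0.5, frozenset({"politics"}): 0.2,
      frozenset({"sport", "politics"}): 0.3}
print(dempster_combine(m1, m2))
```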

18.
Automatic emotion recognition from speech signals is one of the important research areas, which adds value to machine intelligence. Pitch, duration, energy and Mel-frequency cepstral coefficients (MFCC) are the widely used features in the field of speech emotion recognition. A single classifier or a combination of classifiers is used to recognize emotions from the input features. The present work investigates the performance of the features of Autoregressive (AR) parameters, which include gain and reflection coefficients, in addition to the traditional linear prediction coefficients (LPC), to recognize emotions from speech signals. The classification performance of the features of AR parameters is studied using discriminant, k-nearest neighbor (KNN), Gaussian mixture model (GMM), back propagation artificial neural network (ANN) and support vector machine (SVM) classifiers and we find that the features of reflection coefficients recognize emotions better than the LPC. To improve the emotion recognition accuracy, we propose a class-specific multiple classifiers scheme, which is designed by multiple parallel classifiers, each of which is optimized to a class. Each classifier for an emotional class is built by a feature identified from a pool of features and a classifier identified from a pool of classifiers that optimize the recognition of the particular emotion. The outputs of the classifiers are combined by a decision level fusion technique. The experimental results show that the proposed scheme improves the emotion recognition accuracy. Further improvement in recognition accuracy is obtained when the scheme is built by including MFCC features in the pool of features.

19.
Various fusion functions for classifier combination have been designed to optimize the results of ensembles of classifiers (EoC). We propose a pairwise fusion matrix (PFM) transformation, which produces reliable probabilities for the use of classifier combination and can be amalgamated with most existent fusion functions for combining classifiers. The PFM requires only crisp class label outputs from classifiers, and is suitable for high-class problems or problems with few training samples. Experimental results suggest that the performance of a PFM can be a notch above that of the simple majority voting rule (MAJ), and a PFM can work on problems where a behavior-knowledge space (BKS) might not be applicable.

20.
In this paper, a measure of competence based on random classification (MCR) for classifier ensembles is presented. The measure dynamically selects (i.e. for each test example) a subset of classifiers from the ensemble that perform better than a random classifier. Therefore, weak (incompetent) classifiers that would adversely affect the performance of a classification system are eliminated. When all classifiers in the ensemble are evaluated as incompetent, the classification accuracy of the system can be increased by using the random classifier instead. Theoretical justification for using the measure with the majority voting rule is given. Two MCR-based systems were developed and their performance was compared against six multiple classifier systems using data sets taken from the UCI Machine Learning Repository and the Ludmila Kuncheva Collection. The systems developed typically had the highest classification accuracies regardless of the ensemble type used (homogeneous or heterogeneous).
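A hedged sketch of the dynamic selection idea described above: for each test example, a classifier's competence is estimated as its accuracy on the nearest validation neighbours, classifiers no better than a random classifier (accuracy 1/n_classes) are discarded, and the rest vote by majority; the neighbourhood size and fallback are illustrative assumptions, not the paper's exact MCR measure.

```python
import numpy as np
from collections import Counter
from sklearn.neighbors import NearestNeighbors

def dynamic_select_predict(classifiers, X_val, y_val, X_test, n_classes, k=10, seed=0):
    """Per-example selection of classifiers more competent than random guessing."""
    rng = np.random.default_rng(seed)
    nn = NearestNeighbors(n_neighbors=k).fit(X_val)
    val_preds = [clf.predict(X_val) for clf in classifiers]   # cached validation outputs
    predictions = []
    for x in X_test:
        idx = nn.kneighbors(x.reshape(1, -1), return_distance=False)[0]
        competent_votes = []
        for clf, preds in zip(classifiers, val_preds):
            local_acc = np.mean(preds[idx] == y_val[idx])
            if local_acc > 1.0 / n_classes:                   # better than a random classifier
                competent_votes.append(clf.predict(x.reshape(1, -1))[0])
        if competent_votes:
            predictions.append(Counter(competent_votes).most_common(1)[0][0])
        else:
            predictions.append(rng.integers(n_classes))       # fall back to random classification
    return np.array(predictions)
```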
