Similar Documents
Found 20 similar documents (search time: 546 ms).
1.
In this paper, we theoretically analyze some properties relating Fisher's classifier to the optimal quadratic classifier, when the latter is derived using a particular covariance matrix for the classes. We propose an efficient approach for selecting the threshold after a linear transformation onto the one-dimensional space is performed. We achieve this by selecting the decision boundary that minimizes the classification error in the transformed space, assuming that the univariate random variables are normally distributed. Our empirical results on synthetic and real-life data sets show that our approach leads to a smaller classification error than the traditional Fisher's classifier. The results also demonstrate that minimizing the classification error in the transformed space leads to a smaller classification error in the original multi-dimensional space.
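
A minimal numpy sketch of the idea (an illustration of the principle, not the authors' exact procedure): project the two classes with Fisher's direction, fit a univariate Gaussian to each set of projected values, and place the threshold where the prior-weighted densities intersect, which is the point minimizing the error of the one-dimensional rule.

import numpy as np

def fisher_direction(X1, X2):
    # Fisher's direction w = Sw^{-1} (mu1 - mu2); X1, X2 hold one sample per row.
    mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)
    Sw = np.cov(X1, rowvar=False) + np.cov(X2, rowvar=False)
    return np.linalg.solve(Sw, mu1 - mu2)

def optimal_threshold(z1, z2, p1=0.5, p2=0.5):
    # z1 = X1 @ w, z2 = X2 @ w: projected samples. Solve for the point where
    # p1*N(m1,s1) = p2*N(m2,s2), a quadratic a*t^2 + b*t + c = 0 in t.
    m1, s1 = z1.mean(), z1.std(ddof=1)
    m2, s2 = z2.mean(), z2.std(ddof=1)
    a = 1 / (2 * s2**2) - 1 / (2 * s1**2)
    b = m1 / s1**2 - m2 / s2**2
    c = m2**2 / (2 * s2**2) - m1**2 / (2 * s1**2) + np.log(p1 * s2 / (p2 * s1))
    if abs(a) < 1e-12:              # equal variances: the quadratic degenerates
        return -c / b
    roots = np.roots([a, b, c]).real
    return roots[np.argmin(np.abs(roots - (m1 + m2) / 2))]  # root between the means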

2.
We propose an eigenvector-based heteroscedastic linear dimension reduction (LDR) technique for multiclass data. The technique is based on a heteroscedastic two-class technique which utilizes the so-called Chernoff criterion, and successfully extends the well-known linear discriminant analysis (LDA). The latter, which is based on the Fisher criterion, is incapable of dealing with heteroscedastic data in a proper way. For the two-class case, the between-class scatter is generalized so as to capture differences in (co)variances. It is shown that the classical notion of between-class scatter can be associated with Euclidean distances between class means. From this viewpoint, the between-class scatter is generalized by employing the Chernoff distance measure, leading to our proposed heteroscedastic measure. Finally, using the results from the two-class case, a multiclass extension of the Chernoff criterion is proposed. This criterion combines separation information present in the class means as well as the class covariance matrices. Extensive experiments and a comparison with similar dimension reduction techniques are presented.
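
For reference, a sketch of the Chernoff distance between two Gaussian class models N(m1, S1) and N(m2, S2), the quantity that replaces the squared Euclidean distance between class means when the between-class scatter is generalized (a standard formula; alpha = 0.5 gives the Bhattacharyya distance):

import numpy as np

def chernoff_distance(m1, S1, m2, S2, alpha=0.5):
    # Chernoff distance: a quadratic mean term plus a covariance-difference term.
    Sa = (1 - alpha) * S1 + alpha * S2
    d = m2 - m1
    quad = 0.5 * alpha * (1 - alpha) * d @ np.linalg.solve(Sa, d)
    _, ld_a = np.linalg.slogdet(Sa)
    _, ld_1 = np.linalg.slogdet(S1)
    _, ld_2 = np.linalg.slogdet(S2)
    return quad + 0.5 * (ld_a - (1 - alpha) * ld_1 - alpha * ld_2)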

3.
We consider the well-studied pattern recognition problem of designing linear classifiers. When dealing with normally distributed classes, it is well known that the optimal Bayes classifier is linear only when the covariance matrices are equal; this was the only known condition for classifier linearity. In a previous work, we presented the theoretical framework for optimal pairwise linear classifiers for two-dimensional normally distributed random vectors, deriving the necessary and sufficient conditions that the distributions have to satisfy so as to yield the optimal linear classifier as a pair of straight lines. In this paper we extend the previous work to d-dimensional normally distributed random vectors. We provide the necessary and sufficient conditions needed so that the optimal Bayes classifier is a pair of hyperplanes. Various scenarios have been considered, including one which resolves the multi-dimensional Minsky's paradox for the perceptron. We have also provided some three-dimensional examples for all the cases, and tested the classification accuracy of the corresponding pairwise-linear classifier; in all the cases, these linear classifiers achieve very good performance. To demonstrate that the current pairwise-linear philosophy yields superior discriminants on real-life data, we have shown how linear classifiers determined using a maximum-likelihood estimate (MLE) applicable to this approach yield better accuracy than the discriminants obtained by the traditional Fisher's classifier on a real-life data set. The multi-dimensional generalization of the MLE for these classifiers is currently being investigated.
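
For context, the Bayes decision boundary between two Gaussian classes N(\mu_i, \Sigma_i) with priors P(\omega_i) is the standard quadric

    (x-\mu_1)^\top \Sigma_1^{-1} (x-\mu_1) - (x-\mu_2)^\top \Sigma_2^{-1} (x-\mu_2) + \ln\frac{|\Sigma_1|}{|\Sigma_2|} = 2\ln\frac{P(\omega_1)}{P(\omega_2)},

which reduces to a single hyperplane when \Sigma_1 = \Sigma_2; the conditions derived in the paper characterize when it instead factors into a pair of hyperplanes.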

4.
Optimal Bayesian linear classifiers have been studied in the literature for many decades. We demonstrate that all the known results consider only the scenario when the quadratic polynomial has coincident roots. Indeed, we present a complete analysis of the case when the optimal classifier between two normally distributed classes is pairwise and linear. We focus on some special cases of the normal distribution with nonequal covariance matrices. We determine the conditions that the mean vectors and covariance matrices have to satisfy in order to obtain the optimal pairwise linear classifier. As opposed to the state of the art, in all the cases discussed here, the linear classifier is given by a pair of straight lines, which is a particular case of the general equation of second degree. We also provide some empirical results, using synthetic data for Minsky's paradox case, and demonstrate that the linear classifier achieves very good performance. Finally, we have tested our approach on real-life data obtained from the UCI machine learning repository. The empirical results that we obtained show the superiority of our scheme over the traditional Fisher's discriminant classifier.

5.
This paper presents further discussion and development of the two-parameter Fisher criterion and describes its two modifications (a weighted criterion and another multiclass form). The criteria are applied in two algorithms for training linear sequential classifiers. The main idea of the first algorithm is separating the outermost class from the others. The second algorithm, which is a generalization of the first one, is based on the idea of linear division of the classes into two subsets. As a linear division of classes is not always satisfactory, a piecewise-linear version of the sequential algorithm is proposed as well. The computational complexity of the different algorithms is analyzed. All methods are verified on artificial and real-life data sets.

6.
This paper presents a strategy to improve the AdaBoost algorithm with a quadratic combination of base classifiers. We observe that learning this combination is necessary to get better performance, and is possible by constructing an intermediate learner operating on the combined linear and quadratic terms. This is not trivial, as the parameters of the base classifiers are not under direct control, obstructing the application of direct optimization. We propose a new method realizing iterative optimization indirectly. First, we train a classifier by randomizing the labels of the training examples. Subsequently, the input learner is called repeatedly with a systematic update of the labels of the training examples in each round. We show that the quadratic boosting algorithm converges under the condition that the given base learner minimizes the empirical error. We also give an upper bound on the VC-dimension of the new classifier. Our experimental results on 23 standard problems show that quadratic boosting compares favorably with AdaBoost on large data sets, at the cost of training speed. The classification time of the two algorithms, however, is equivalent.
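
As a sketch of the representation involved (this is the feature-space view of a quadratic combination, not the paper's indirect label-update training loop): a quadratic combination of base-classifier responses is a linear function of the responses together with their pairwise products.

import numpy as np

def quadratic_terms(H):
    # H: (n_samples, m) matrix of base-classifier responses in {-1, +1}.
    # Returns [H, all products H_i * H_j for i < j]; a linear classifier on
    # these columns realizes a quadratic combination of the base classifiers.
    n, m = H.shape
    pairs = [H[:, i] * H[:, j] for i in range(m) for j in range(i + 1, m)]
    return np.column_stack([H] + pairs)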

7.
Several studies have demonstrated the superior performance of ensemble classification algorithms, whereby multiple member classifiers are combined into one aggregated and powerful classification model, over single models. In this paper, two rotation-based ensemble classifiers are proposed as modeling techniques for customer churn prediction. In Rotation Forests, feature extraction is applied to feature subsets in order to rotate the input data for training base classifiers, while RotBoost combines Rotation Forest with AdaBoost. In an experimental validation based on data sets from four real-life customer churn prediction projects, Rotation Forest and RotBoost are compared to a set of well-known benchmark classifiers. Moreover, variations of Rotation Forest and RotBoost are compared, implementing three alternative feature extraction algorithms: principal component analysis (PCA), independent component analysis (ICA), and sparse random projections (SRP). The performance of rotation-based ensemble classifiers is found to depend upon: (i) the performance criterion used to measure classification performance, and (ii) the implemented feature extraction algorithm. In terms of accuracy, RotBoost outperforms Rotation Forest, but none of the considered variations offers a clear advantage over the benchmark algorithms. However, in terms of AUC and top-decile lift, the results clearly demonstrate the competitive performance of Rotation Forests compared to the benchmark algorithms. Moreover, ICA-based Rotation Forests outperform all other considered classifiers and are therefore recommended as a well-suited alternative classification technique for the prediction of customer churn that allows for improved marketing decision making.
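
A simplified sketch of one Rotation Forest member (assuming scikit-learn; details such as class subsampling are omitted): the feature set is split into disjoint subsets, PCA is fitted on a bootstrap sample of each subset, the principal axes are assembled into a block-diagonal rotation matrix, and a tree is trained on the rotated data.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def rotation_forest_member(X, y, n_subsets=3):
    # Assumes more bootstrap samples than features per subset, so that
    # PCA returns a full square set of components for each block.
    idx = rng.permutation(X.shape[1])
    R = np.zeros((X.shape[1], X.shape[1]))
    for cols in np.array_split(idx, n_subsets):
        boot = rng.choice(len(X), size=int(0.75 * len(X)), replace=True)
        pca = PCA().fit(X[np.ix_(boot, cols)])
        R[np.ix_(cols, cols)] = pca.components_.T
    return R, DecisionTreeClassifier().fit(X @ R, y)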

8.
Nonlinear classification has been a non-trivial task in machine learning for the past decades. In recent years, kernel machines have successfully generalized inner-product-based linear classifiers to nonlinear ones by transforming data into some high- or infinite-dimensional feature space. However, due to their implicit space transformation and unobservable latent feature space, it is hard to gain an intuitive understanding of their working mechanism. In this paper, we propose a comprehensible framework for nonlinear classifier design, called the Manifold Mapping Machine (M3). M3 can generalize any linear classifier to a nonlinear one by transforming data into some low-dimensional feature space explicitly. To demonstrate the effectiveness of the M3 framework, we further present an algorithmic implementation of M3 named the Supervised Spectral Space Classifier (S3C). Compared with kernel classifiers, S3C can achieve similar or even better data separation by mapping data into the low-dimensional spectral space, allowing both the mapped data and the new feature space to be examined directly. Moreover, with discriminative information integrated into the spectral space transformation, the classification performance of S3C is more robust than that of kernel classifiers. Experimental results show that S3C is superior to other state-of-the-art nonlinear classifiers on both synthetic and real-world data sets.

9.
The majority of the available classification systems focus on the minimization of the classification error rate. This is not always a suitable metric, especially when dealing with two-class problems with skewed class and cost distributions. In this case, an effective criterion to measure the quality of a decision rule is the area under the Receiver Operating Characteristic curve (AUC), which is also useful for measuring the ranking quality of a classifier, as required in many real applications. In this paper we propose a nonparametric linear classifier based on the maximization of AUC. The approach relies on the analysis of the Wilcoxon–Mann–Whitney statistic of each single feature and on an iterative pairwise coupling of the features for the optimization of the ranking of the combined feature. This pairwise feature evaluation makes the proposed procedure essentially different from other classifiers that use AUC as a criterion. Experiments performed on synthetic and real data sets, and comparisons with previous approaches, confirm the effectiveness of the proposed method.
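
A minimal sketch of the building block (the single-feature Wilcoxon–Mann–Whitney statistic; the paper's iterative pairwise coupling is not shown): the WMW statistic of a score is exactly its empirical AUC, the fraction of (positive, negative) pairs it ranks correctly.

import numpy as np

def wmw_auc(score, y):
    # y in {0, 1}; ties between a positive and a negative count as half.
    pos, neg = score[y == 1], score[y == 0]
    diff = pos[:, None] - neg[None, :]
    return (np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / (len(pos) * len(neg))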

10.
Hidden Markov models (HMMs) have been shown to provide a high level of performance for detecting anomalies in sequences of system calls to the operating system kernel. Using Boolean conjunction and disjunction functions to combine the responses of multiple HMMs in the ROC space may significantly improve performance over a "single best" HMM. However, these techniques assume that the classifiers are conditionally independent and that their ROC curves are convex. These assumptions are violated in most real-world applications, especially when classifiers are designed using limited and imbalanced training data. In this paper, the iterative Boolean combination (IBC) technique is proposed for efficient fusion of the responses from multiple classifiers in the ROC space. It applies all Boolean functions to combine the ROC curves corresponding to multiple classifiers, requires no prior assumptions, and its time complexity is linear in the number of classifiers. The results of computer simulations conducted on both synthetic and real-world host-based intrusion detection data indicate that the IBC of responses from multiple HMMs can achieve a significantly higher level of performance than the Boolean conjunction and disjunction combinations, especially when training data are limited and imbalanced. The proposed IBC is general in that it can be employed to combine diverse responses of any crisp or soft one- or two-class classifiers, and for a wide range of application domains.
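
A toy sketch of the fusion step for two crisp detectors (one pair of ROC points only; IBC iterates this over all threshold pairs and all classifiers): apply each 2-input Boolean function to the binary responses and read off the resulting ROC point.

import numpy as np

def boolean_roc_points(d1, d2, y):
    # d1, d2: binary detector responses; y: ground-truth labels in {0, 1}.
    d1, d2 = d1.astype(bool), d2.astype(bool)
    fns = {"AND": d1 & d2, "OR": d1 | d2, "XOR": d1 ^ d2,
           "d1 AND NOT d2": d1 & ~d2, "NOT d1 AND d2": ~d1 & d2}
    # Each fused response yields one (FPR, TPR) point in the ROC space.
    return {name: (float(np.mean(d[y == 0])), float(np.mean(d[y == 1])))
            for name, d in fns.items()}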

11.
Kernel methods are a class of well-established and successful algorithms for pattern analysis, thanks to their mathematical elegance and good performance. Numerous nonlinear extensions of pattern recognition techniques have been proposed so far based on the so-called kernel trick. The objective of this paper is twofold. First, we derive an additional kernel tool that is still missing, namely the kernel quadratic discriminant (KQD). We discuss different formulations of KQD based on the regularized kernel Mahalanobis distance in both complete and class-related subspaces. Second, we propose suitable extensions of kernel linear and quadratic discriminants to indefinite kernels. We provide classifiers that are applicable to kernels defined by any symmetric similarity measure. This is important in practice because problem-suited proximity measures often violate the requirement of positive definiteness. As in the traditional case, KQD can be advantageous for data with unequal class spreads in the kernel-induced spaces, which cannot be well separated by a linear discriminant. We illustrate this on artificial and real data for both positive definite and indefinite kernels.

12.
Combining feature reduction and case selection in building CBR classifiers
CBR systems that are built for classification problems are called CBR classifiers. This paper presents a novel and fast approach to building efficient and competent CBR classifiers that combines both feature reduction (FR) and case selection (CS). It has three central contributions: 1) it develops a fast rough-set method based on relative attribute dependency among features to compute the approximate reduct; 2) it constructs and compares different case selection methods based on the similarity measure and the concepts of case coverage and case reachability; and 3) CBR classifiers built using a combination of the FR and CS processes can reduce the training burden as well as the need to acquire domain knowledge. The overall experimental results on four real-life data sets show that the combined FR and CS method can preserve, and may also improve, the solution accuracy while at the same time substantially reducing the storage space. The case retrieval time is also greatly reduced, because the resulting CBR classifier contains a smaller number of cases with fewer features. The developed FR and CS combination method is also compared with kernel PCA and SVM techniques; their storage requirements, classification accuracy, and classification speed are presented and discussed.

13.
It has been widely accepted that classification accuracy can be improved by combining the outputs of multiple classifiers. However, how to combine multiple classifiers with various (potentially conflicting) decisions is still an open problem. A rich collection of classifier combination procedures, many of which are heuristic in nature, has been developed for this goal. In this brief, we describe a dynamic approach to combining classifiers that have expertise in different regions of the input space. To this end, we use local classifier accuracy estimates to weight classifier outputs. Specifically, we estimate the local recognition accuracies of classifiers near a query sample by utilizing its nearest neighbors, and then use these estimates to find the best weights of classifiers to label the query. The problem is formulated as a convex quadratic optimization problem, which returns optimal nonnegative classifier weights with respect to the chosen objective function, and the weights ensure that locally most accurate classifiers are weighted more heavily for labeling the query sample. Experimental results on several data sets indicate that the proposed weighting scheme outperforms other popular classifier combination schemes, particularly on problems with complex decision boundaries. Hence, the results indicate that local-classification-accuracy-based combination techniques are well suited for decision making when the classifiers are trained by focusing on different regions of the input space.
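
A simplified sketch of the weighting idea (the paper refines the weights with a convex quadratic program; here the local accuracy estimates are used directly): estimate each classifier's accuracy on the k validation samples nearest the query, then take a weighted vote.

import numpy as np

def local_weighted_vote(classifiers, X_val, y_val, x_query, k=10):
    # Local accuracy of each classifier = its accuracy on the query's
    # k nearest neighbors in the validation set.
    nn = np.argsort(np.linalg.norm(X_val - x_query, axis=1))[:k]
    w = np.array([np.mean(c.predict(X_val[nn]) == y_val[nn]) for c in classifiers])
    w = w / w.sum() if w.sum() > 0 else np.full(len(w), 1.0 / len(w))
    preds = np.array([c.predict(x_query[None, :])[0] for c in classifiers])
    labels = np.unique(preds)
    return labels[np.argmax([w[preds == lab].sum() for lab in labels])]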

14.
This paper presents a method for improved ensemble learning, by treating the optimization of an ensemble of classifiers as a compressed sensing problem. Ensemble learning methods improve the performance of a learned predictor by integrating a weighted combination of multiple predictive models. Ideally, the number of models needed in the ensemble should be minimized, while optimizing the weights associated with each included model. We solve this problem by treating it as an instance of the compressed sensing problem, in which a sparse solution must be reconstructed from an under-determined linear system. Compressed sensing techniques are then employed to find an ensemble which is both small and effective. An additional contribution of this paper is to present a new performance evaluation method (a new pairwise diversity measurement) called the roulette-wheel kappa-error. This method takes into account the different weightings of the classifiers, and also reduces the total number of pairs of classifiers needed in the kappa-error diagram by selecting pairs through roulette-wheel selection according to those weightings. This approach can greatly improve the clarity and informativeness of the kappa-error diagram, especially when the number of classifiers in the ensemble is large. We use 25 different public data sets to evaluate and compare the performance of compressed sensing ensembles using four different sparse reconstruction algorithms, combined with two different classifier learning algorithms and two different training data manipulation techniques. We also compare our method against five other state-of-the-art pruning methods. These experiments show that our method produces comparable or better accuracy, while being significantly faster than the compared methods.
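
A sketch of the sparse-recovery step, assuming scikit-learn's orthogonal matching pursuit as the reconstruction algorithm (the paper evaluates four sparse solvers): column j of H holds the predictions of classifier j, and a sparse weight vector w with H @ w close to y selects a small weighted sub-ensemble.

import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def prune_ensemble(H, y, n_members=5):
    # H: (n_samples, n_classifiers) matrix of {-1, +1} predictions;
    # y: target labels in {-1, +1}.  Nonzero coefficients mark the
    # members kept in the pruned ensemble, with their weights.
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_members).fit(H, y)
    keep = np.flatnonzero(omp.coef_)
    return keep, omp.coef_[keep]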

15.
Finite mixture models are widely used to perform model-based clustering of multivariate data sets. Most of the existing mixture models work with linear data, whereas real-life applications may involve multivariate data having both circular and linear characteristics. No existing mixture models can accommodate such correlated circular–linear data. In this paper, we consider designing a mixture model for multivariate data having one circular variable. In order to construct a circular–linear joint distribution with proper inclusion of correlation terms, we use the semi-wrapped Gaussian distribution. Further, we construct a mixture model (termed SWGMM) of such joint distributions. This mixture model is capable of approximating the distribution of multi-modal circular–linear data. An unsupervised learning of the mixture parameters is proposed based on the expectation-maximization method. Clustering is performed using the maximum a posteriori criterion. To evaluate the performance of SWGMM, we choose the task of color image segmentation in LCH space. We present comprehensive results and compare SWGMM with existing methods. Our study reveals that the proposed mixture model outperforms the other methods in most cases.
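
A sketch of the kind of density such a model mixes, assuming one circular coordinate placed first (a truncated-sum approximation of a semi-wrapped Gaussian; scipy is used for the Gaussian density):

import numpy as np
from scipy.stats import multivariate_normal

def semi_wrapped_gaussian_pdf(theta, x, mu, Sigma, K=3):
    # Wrap the first (circular) coordinate by summing Gaussian densities
    # at theta + 2*pi*k; |k| <= K truncates the infinite wrapping sum.
    return sum(multivariate_normal.pdf(np.r_[theta + 2 * np.pi * k, x], mu, Sigma)
               for k in range(-K, K + 1))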

16.
We propose a lookup-table color classification method in a hybrid color space based on a linear classifier. The method addresses the problem that the discriminative power of lookup-table color classification is limited by factors such as the choice of color space and the determination of thresholds, making similar colors hard to distinguish. The idea of linear classifiers from pattern recognition is applied to building the mapping relation of the color lookup table, and the HSI and YUV color spaces are used simultaneously to improve the table's ability to distinguish similar colors. Experimental results show that the lookup table built in the hybrid space with a linear classifier follows simple construction rules, gives intuitive results, has a strong ability to distinguish similar colors, and is suitable for fast color segmentation of color images.
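
A hedged sketch of the idea (hypothetical helper; HSV from the Python standard library stands in for HSI, and w, b are the parameters of a linear classifier assumed to be trained beforehand on the 6-dimensional hybrid feature): precompute the classifier's decision for every quantized RGB cell, so that per-pixel classification reduces to a table lookup.

import numpy as np
import colorsys

def build_color_lut(w, b, bits=6):
    # One cell per quantized RGB triple; each stores the linear
    # classifier's decision on the hybrid (HSV + YUV) feature vector.
    n = 1 << bits
    lut = np.zeros((n, n, n), dtype=np.uint8)
    for r in range(n):
        for g in range(n):
            for bl in range(n):
                R, G, B = (np.array([r, g, bl]) + 0.5) / n
                h, s, v = colorsys.rgb_to_hsv(R, G, B)
                Y = 0.299 * R + 0.587 * G + 0.114 * B
                U, V = 0.492 * (B - Y), 0.877 * (R - Y)
                f = np.array([h, s, v, Y, U, V])
                lut[r, g, bl] = 1 if f @ w + b > 0 else 0
    return lut  # for 8-bit pixels: lut[R >> (8 - bits), G >> (8 - bits), B >> (8 - bits)]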

17.
This paper describes a performance evaluation study in which some efficient classifiers are tested in handwritten digit recognition. The evaluated classifiers include a statistical classifier (the modified quadratic discriminant function, MQDF), three neural classifiers, and an LVQ (learning vector quantization) classifier. They are efficient in that high accuracies can be achieved at moderate memory space and computation cost. The performance is measured in terms of classification accuracy, sensitivity to training sample size, ambiguity rejection, and outlier resistance. The outlier resistance of the neural classifiers is enhanced by training with synthesized outlier data. The classifiers are tested on a large data set extracted from NIST SD19. The results show that the test accuracies of the evaluated classifiers are comparable to or higher than those of the nearest neighbor (1-NN) rule and regularized discriminant analysis (RDA). It is shown that neural classifiers are more susceptible to small sample sizes than MQDF, although they yield higher accuracies on large sample sizes. As a neural classifier, the polynomial classifier (PC) gives the highest accuracy and performs best in ambiguity rejection. On the other hand, MQDF is superior in outlier rejection even though it is not trained with outlier data. The results indicate that pattern classifiers have complementary advantages and should be appropriately combined to achieve higher performance.
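
For reference, a sketch of the MQDF class distance (the common MQDF2 form: keep the k leading eigenpairs of the class covariance and replace the minor eigenvalues with a constant delta; the class with the smallest value wins):

import numpy as np

def mqdf_distance(x, mu, eigvals, eigvecs, k, delta):
    # eigvals sorted in decreasing order; eigvecs holds eigenvectors as columns.
    r = x - mu
    proj = eigvecs[:, :k].T @ r                      # principal-axis projections
    major = np.sum(proj**2 / eigvals[:k])            # Mahalanobis part, k axes
    residual = (r @ r - np.sum(proj**2)) / delta     # remaining subspace
    log_det = np.sum(np.log(eigvals[:k])) + (len(x) - k) * np.log(delta)
    return major + residual + log_det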

18.
Multi-output regression aims at learning a mapping from an input feature space to a multivariate output space. Previous algorithms define the loss functions using a fixed global coordinate system on the output space, which is equivalent to assuming that the output space is a whole Euclidean space with a dimension equal to the number of outputs; the underlying structure of the output space is thus completely ignored. In this paper, we consider the output space as a Riemannian submanifold in order to incorporate its geometric structure into the regression process. To this end, we propose a novel mechanism, called locally linear transformation (LLT), to define the loss functions on the output manifold. In this way, currently existing regression algorithms can be improved. In particular, we propose an algorithm under the support vector regression framework. Our experimental results on synthetic and real-life data are satisfactory.

19.
Improving constrained pattern mining with first-fail-based heuristics
In this paper, we present a general framework to mine patterns with antimonotone constraints. This framework uses a technique that structures the pattern space in a way that facilitates the integration of constraints within the mining process. Furthermore, we also introduce a powerful strategy that uses background information on the data to speed up the mining process. We illustrate our approach on a popular structured data mining problem, the frequent subgraph mining problem, and show, through experiments on synthetic and real-life data, that this general approach has advantages over state-of-the-art pattern mining algorithms.

20.
Prototype-based classification relies on the distances between the examples to be classified and carefully chosen prototypes. A small set of prototypes is of interest to keep the computational complexity low while maintaining high classification accuracy. An experimental study of some old and new prototype optimisation techniques is presented, in which the prototypes are either selected or generated from the given data. These condensing techniques are evaluated on real data, represented in vector spaces, by comparing their resulting reduction rates and classification performance. Usually the determination of prototypes is studied in relation to the nearest neighbour rule. We will show that the use of more general dissimilarity-based classifiers can be more beneficial. An important point in our study is that the adaptive condensing schemes discussed here allow the user to choose the number of prototypes freely, according to need. If such techniques are combined with linear dissimilarity-based classifiers, they provide the best trade-off between small condensed sets and high classification accuracy.
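
As a concrete example of a selection-based condensing scheme (Hart's classical condensed nearest neighbour, one baseline of the kind such studies compare; not a method proposed in the paper):

import numpy as np

def condense(X, y, passes=3):
    # Keep only the prototypes needed for the 1-NN rule to label the
    # training set correctly; start from the first sample and add any
    # sample the current prototype set misclassifies.
    keep = [0]
    for _ in range(passes):
        changed = False
        for i in range(len(X)):
            j = np.argmin(np.linalg.norm(X[keep] - X[i], axis=1))
            if y[keep[j]] != y[i] and i not in keep:
                keep.append(i)
                changed = True
        if not changed:
            break
    return np.array(keep)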

