Similar Documents
20 similar documents found (search time: 0 ms)
1.
It is well-known that the applicability of both linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA) to high-dimensional pattern classification tasks such as face recognition (FR) often suffers from the so-called “small sample size” (SSS) problem, which arises from the small number of available training samples compared to the dimensionality of the sample space. In this paper, we propose a new QDA-like method that effectively addresses the SSS problem using a regularization technique. Extensive experimentation performed on the FERET database indicates that the proposed methodology outperforms traditional methods such as Eigenfaces, direct QDA and direct LDA in a number of SSS setting scenarios.
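The regularization idea this abstract relies on can be illustrated with a minimal sketch: shrink a rank-deficient sample covariance toward a scaled identity so that it becomes invertible. This shows only the generic device, with an assumed shrinkage parameter `gamma`; it is not the paper's exact QDA-like scheme.

```python
import numpy as np

def regularized_covariance(X, gamma=0.1):
    """Shrink a possibly singular sample covariance toward a scaled identity.

    Generic small-sample-size regularization sketch; the paper's exact
    QDA-like estimator differs.
    """
    d = X.shape[1]
    S = np.cov(X, rowvar=False)          # d x d, rank-deficient when n <= d
    return (1.0 - gamma) * S + gamma * (np.trace(S) / d) * np.eye(d)

# 5 samples in 20 dimensions: the raw covariance is singular,
# while the regularized estimate is safely invertible.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 20))
S_reg = regularized_covariance(X)
print(np.linalg.matrix_rank(np.cov(X, rowvar=False)))  # < 20
print(np.linalg.matrix_rank(S_reg))                    # 20
```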

2.
This paper develops a generalized nonlinear discriminant analysis (GNDA) method and deals with its small sample size (SSS) problems. GNDA is a nonlinear extension of linear discriminant analysis (LDA), while kernel Fisher discriminant analysis (KFDA) can be regarded as a special case of GNDA. In LDA, an undersampled or small sample size problem occurs when the sample size is smaller than the sample dimensionality, which results in the singularity of the within-class scatter matrix. Due to the high-dimensional nonlinear mapping in GNDA, small sample size problems arise rather frequently. To tackle this issue, this research presents five different schemes for GNDA to solve the SSS problems. Experimental results on real-world data sets show that these schemes for GNDA are very effective in tackling small sample size problems.

3.
The goal of face recognition is to distinguish persons via their facial images. Each person's images form a cluster, and a new image is recognized by assigning it to the correct cluster. Since the images are very high-dimensional, it is necessary to reduce their dimension. Linear discriminant analysis (LDA) has been shown to be effective at dimension reduction while preserving the cluster structure of the data. It is classically defined as an optimization problem involving covariance matrices that represent the scatter within and between clusters. The requirement that one of these matrices be nonsingular restricts its application to datasets in which the dimension of the data does not exceed the sample size. For face recognition, however, the dimension typically exceeds the number of images in the database, resulting in what is referred to as the small sample size problem. Recently, the applicability of LDA has been extended by using the generalized singular value decomposition (GSVD) to circumvent the nonsingularity requirement, thus making LDA directly applicable to face recognition data. Our experiments confirm that LDA/GSVD solves the small sample size problem very effectively as compared with other current methods.
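The singularity that motivates LDA/GSVD is easy to reproduce directly: when the data dimension exceeds the number of samples, the within-class scatter matrix cannot be full rank. The sketch below builds the standard scatter matrices to show this; it is only the generic construction, not the paper's GSVD algorithm.

```python
import numpy as np

def scatter_matrices(X, y):
    """Within-class (Sw) and between-class (Sb) scatter matrices for LDA."""
    d = X.shape[1]
    mean = X.mean(axis=0)
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean).reshape(-1, 1)
        Sb += len(Xc) * (diff @ diff.T)
    return Sw, Sb

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 50))        # 10 "images", 50 "pixels": d > n
y = np.repeat([0, 1], 5)
Sw, Sb = scatter_matrices(X, y)
print(np.linalg.matrix_rank(Sw))     # at most n - #classes = 8, far below d = 50
```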

4.
This paper addresses the small sample size problem in linear discriminant analysis, which occurs in face recognition applications. Belhumeur et al. [IEEE Trans. Pattern Anal. Mach. Intell. 19 (7) (1997) 711-720] proposed the FisherFace method. We find that the FisherFace method might fail because, after the PCA transform, the corresponding within-class covariance matrix can still be singular; this phenomenon is verified on the Yale face database. Hence we propose to use an inverse Fisher criterion. Our method works even when the number of training images per class is one. Experimental results suggest that this new approach performs well.

5.
Exponential locality preserving projections for small sample size problem
Locality preserving projections (LPP) is a widely used manifold-based dimensionality reduction technique. However, it suffers from two problems: (1) the small sample size problem and (2) its performance is sensitive to the neighborhood size k. In order to address these problems, we propose exponential locality preserving projections (ELPP) by introducing the matrix exponential in this paper. ELPP avoids the singularity of the matrices and obtains more valuable information than LPP. The experiments are conducted on three public face databases, ORL, Yale and Georgia Tech. The results show that the performance of ELPP is better than that of LPP and the state-of-the-art LPP Improved1.
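The matrix-exponential device at the heart of ELPP can be checked in a few lines: for a symmetric matrix S with eigenvalues λᵢ, expm(S) has eigenvalues exp(λᵢ) > 0, so it is always invertible even when S itself is singular. This sketch shows only that device, not the full ELPP algorithm.

```python
import numpy as np
from scipy.linalg import expm

# Build a singular symmetric PSD matrix (rank 3 in an 8-dimensional space),
# then take its matrix exponential: the null-space eigenvalues 0 map to
# exp(0) = 1, so the exponential is full rank.
rng = np.random.default_rng(2)
A = rng.normal(size=(3, 8))
S = A.T @ A                      # 8x8, rank 3 -> singular
E = expm(S)
print(np.linalg.matrix_rank(S))  # 3
print(np.linalg.matrix_rank(E))  # 8: invertible after the exponential
```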

6.
An improved manifold learning method, called enhanced semi-supervised local Fisher discriminant analysis (ESELF), for face recognition is proposed. Motivated by the fact that being statistically uncorrelated and parameter-free are two desirable and promising characteristics for dimension reduction, a new difference-based optimization objective function with unlabeled samples has been designed. The proposed method preserves the manifold structure of labeled and unlabeled samples, in addition to separating labeled samples in different classes from each other. The semi-supervised method has an analytic form of the globally optimal solution, which can be computed by eigendecomposition. Experiments on synthetic data and the AT&T, Yale and CMU PIE face databases are performed to test and evaluate the proposed algorithm. The experimental results and comparisons demonstrate the effectiveness of the proposed method.

7.
In this paper we present a new implementation of null space based linear discriminant analysis. The main features of our implementation are: (i) the optimal transformation matrix is obtained easily by orthogonal transformations alone, without computing any eigendecomposition or singular value decomposition (SVD); consequently, our new implementation is eigendecomposition-free and SVD-free; (ii) its main computational cost comes from an economical QR factorization of the data matrix and an economical QR factorization, with column pivoting, of an n×n matrix, where n is the sample size; thus our new implementation is fast. The effectiveness of our new implementation is demonstrated on some real-world data sets.
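The null-space idea behind such implementations can be sketched with orthogonal transformations only: the null space of the within-class scatter Sw = Hw·Hwᵀ is the orthogonal complement of range(Hw), where Hw stacks the centered training samples, and a pivoted QR of Hw exposes that complement without any SVD or eigendecomposition. This is an illustrative sketch of the principle, not the paper's exact algorithm.

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(3)
d, n = 30, 8
X = rng.normal(size=(n, d))
y = np.repeat([0, 1], 4)

# Hw: d x n matrix of class-centered samples; Sw = Hw @ Hw.T.
Hw = np.hstack([(X[y == c] - X[y == c].mean(axis=0)).T for c in (0, 1)])

Q, R, piv = qr(Hw, pivoting=True)      # full QR with column pivoting
r = np.linalg.matrix_rank(Hw)          # n - #classes = 6 here
N = Q[:, r:]                           # orthonormal basis of null(Sw)

Sw = Hw @ Hw.T
print(np.allclose(N.T @ Sw @ N, 0.0))  # Sw vanishes on its null space
```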

8.
In this paper, we present an efficient algorithm to compute the most discriminant vectors of LDA for high-dimensional data sets. Experiments on the ORL face database confirm the effectiveness of the proposed method.

9.
For linear discriminant analysis (LDA), the ratio trace and trace ratio are two basic criteria generalized from the classical Fisher criterion function, while the orthogonal and uncorrelated constraints are two common conditions imposed on the optimal linear transformation. The ratio trace criterion, with both the orthogonal and uncorrelated constraints, has been extensively studied in the literature, whereas the trace ratio criterion has received less interest, mainly due to the lack of a closed-form solution and efficient algorithms. In this paper, we make an extensive study of uncorrelated trace ratio linear discriminant analysis, with particular emphasis on its application to the undersampled problem. Two regularized uncorrelated trace ratio LDA models are discussed, for which the global solutions are characterized and efficient algorithms are established. Experimental comparisons of several LDA approaches are conducted on several real-world datasets, and the results show that uncorrelated trace ratio LDA is competitive with orthogonal trace ratio LDA and outperforms methods based on the ratio trace criterion in terms of classification performance.

10.
Classification of high-dimensional statistical data is usually not amenable to standard pattern recognition techniques because of an underlying small sample size problem. To address the problem of high-dimensional data classification in the face of a limited number of samples, a novel principal component analysis (PCA) based feature extraction/classification scheme is proposed. The proposed method yields a piecewise linear feature subspace and is particularly well-suited to difficult recognition problems where achievable classification rates are intrinsically low. Such problems are often encountered in cases where classes are highly overlapped, or in cases where a prominent curvature in data renders a projection onto a single linear subspace inadequate. The proposed feature extraction/classification method uses class-dependent PCA in conjunction with linear discriminant feature extraction and performs well on a variety of real-world datasets, ranging from digit recognition to classification of high-dimensional bioinformatics and brain imaging data.

11.
董琰 (Dong Yan), 《计算机工程与设计》 (Computer Engineering and Design), 2012, 33(4): 1591-1594, 1681
To remedy the shortcomings of Fisherface-style discriminant analysis in classifying high-dimensional, small-sample data, this paper builds on the maximum scatter difference criterion and proposes a method that uses multilinear subspace techniques to describe each class of samples separately; the method reflects the within-class and between-class distribution of the samples more accurately. For classification, the decision is based not on distance but on the membership confidence obtained under the Bayesian decision rule. Experimental results demonstrate the effectiveness of the method, which achieves a higher recognition rate than comparable methods.

12.
We propose an innovative technique, geometric linear discriminant analysis (Geometric LDA), to reduce the complexity of pattern recognition systems by using a linear transformation to lower the dimension of the observation space. We experimentally compare Geometric LDA to other dimensionality reduction methods found in the literature, and show that it produces a linear transformation that is as good as, and in many cases significantly better than, those produced by the other methods.

13.
Mixture discriminant analysis (MDA) and subclass discriminant analysis (SDA) are supervised classification approaches. They have an advantage over standard linear discriminant analysis (LDA) in large sample size problems, since both of them divide the samples in each class into subclasses, which preserves locality, whereas LDA does not. However, since the current MDA and SDA algorithms perform subclass division in just one step in the original data space before solving the generalized eigenvalue problem, two problems arise: (1) they ignore the relation among classes, since subclass division is performed within each isolated class; (2) they cannot guarantee good performance of classifiers in the transformed space, because locality in the original data space may not be preserved in the transformed space. To address these problems, this paper presents a new approach to subclass division based on k-means clustering in the projected space, performed class by class in iterative steps under an EM-like framework. Experiments are performed on an artificial data set, the UCI machine learning data sets, the CENPARMI handwritten numeral database, the NUST603 handwritten Chinese character database, and a terrain cover database. Extensive experimental results demonstrate the performance advantages of the proposed method.

14.
Fisher Linear Discriminant Analysis Based on Vector Groups
A Fisher linear discriminant analysis method based on vector groups is proposed. The method first partitions the original high-dimensional vector into groups of low-dimensional sub-vectors, and then applies Fisher linear discriminant analysis to the vector groups. This treatment not only solves the small sample size problem at arbitrarily high dimensionality, but also, through an appropriate choice of sub-vector dimension, extracts the most effective features from the vector. Moreover, vector-group-based Fisher linear discriminant analysis is a further generalization of both Fisher linear discriminant analysis and two-dimensional Fisher linear discriminant analysis.

15.
This paper provides a unifying view of three discriminant linear feature extraction methods: linear discriminant analysis, heteroscedastic discriminant analysis and maximization of mutual information. We propose a model-independent reformulation of the criteria related to these three methods that stresses their similarities and elucidates their differences. Based on assumptions for the probability distribution of the classification data, we obtain sufficient conditions under which two or more of the above criteria coincide. It is shown that these conditions also suffice for Bayes optimality of the criteria. Our approach results in an information-theoretic derivation of linear discriminant analysis and heteroscedastic discriminant analysis. Finally, regarding linear discriminant analysis, we discuss its relation to multidimensional independent component analysis and derive suboptimality bounds based on information theory.

16.
Many pattern recognition applications involve the treatment of high-dimensional data and the small sample size problem. Principal component analysis (PCA) is a commonly used dimension reduction technique, while linear discriminant analysis (LDA) is often employed for classification. PCA plus LDA is a famous framework for discriminant analysis in high-dimensional spaces and singular cases. In this paper, we examine the theory of this framework and find that, even if there is no small sample size problem, PCA dimension reduction cannot guarantee the subsequent successful application of LDA. We thus develop an improved discriminant analysis method by introducing an inverse Fisher criterion and adding a constraint to the PCA procedure so that the singularity phenomenon does not occur. Experimental results on face recognition suggest that this new approach works well and can be applied even when the number of training samples is one per class.
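The "PCA plus LDA" baseline that this abstract examines can be sketched with scikit-learn: PCA first shrinks the dimension below the sample count so the subsequent LDA scatter matrices are nonsingular. This is the standard framework only, with assumed toy data, not the paper's inverse-Fisher variant.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Toy two-class data: 20 samples in 200 dimensions (d far above n),
# where plain LDA would face a singular within-class scatter matrix.
rng = np.random.default_rng(4)
n_per_class, d = 10, 200
X = np.vstack([rng.normal(loc=c, size=(n_per_class, d)) for c in (0.0, 1.0)])
y = np.repeat([0, 1], n_per_class)

# PCA to 10 dimensions, then LDA on the reduced representation.
clf = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
clf.fit(X, y)
print(clf.score(X, y))   # training accuracy on this separable toy problem
```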

17.
Linear discriminant analysis (LDA) is a linear classifier that has proven to be powerful and competitive compared with the main state-of-the-art classifiers. However, the LDA algorithm assumes that the sample vectors of each class are generated from underlying multivariate normal distributions with a common covariance matrix but different means (i.e., homoscedastic data). This assumption has restricted the use of LDA considerably. Over the years, authors have defined several extensions to the basic formulation of LDA. One such method is heteroscedastic LDA (HLDA), which was proposed to address the heteroscedasticity problem. Another is nonparametric DA (NDA), in which the normality assumption is relaxed. In this paper, we propose a novel Bayesian logistic discriminant (BLD) model which can address both the normality and the heteroscedasticity problems. The normality assumption is relaxed by approximating the underlying distribution of each class with a mixture of Gaussians. Hence, the proposed BLD provides more flexibility and better classification performance than the LDA, HLDA and NDA. Subclass and multinomial versions of the BLD are proposed. The posterior distribution of the BLD model is elegantly approximated by a tractable Gaussian form using variational transformation and Jensen's inequality, allowing a straightforward computation of the weights. An extensive comparison of the BLD to the LDA, support vector machine (SVM), HLDA, NDA and subclass discriminant analysis (SDA), performed on artificial and real data sets, has shown the advantages and superiority of our proposed method. In particular, the experiments on face recognition have clearly shown a significant improvement of the proposed BLD over the LDA.

18.
In the last decade, many variants of classical linear discriminant analysis (LDA) have been developed to tackle the undersampled problem in face recognition. However, choosing among the variants is not easy, since these methods involve eigenvalue decompositions that make cross-validation computationally expensive. In this paper, we propose to solve this problem by unifying these LDA variants in one framework: principal component analysis (PCA) plus constrained ridge regression (CRR). In CRR, one selects a target (also called a class indicator) for each class and finds a projection that locates the class centers at their class targets; the transform minimizes the within-class distances with a penalty on the transform norm, as in ridge regression. Under this framework, many existing LDA methods can be viewed as PCA+CRR with particular regularization numbers and class indicators, and choosing the best LDA method becomes choosing the best member of the CRR family. The latter can be done by comparing their leave-one-out (LOO) errors, and we present an efficient algorithm, which requires computations similar to the training process of CRR, to evaluate the LOO errors. Experiments on the Yale Face B, Extended Yale B and CMU-PIE databases are conducted to demonstrate the effectiveness of the proposed methods.
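The cheap LOO evaluation that makes this model selection practical rests on a classical identity: for ridge regression, the leave-one-out residual is the ordinary residual divided by (1 − Hᵢᵢ), where H is the hat matrix. The sketch below shows generic ridge regression onto one-hot class indicators with that closed-form LOO, under an assumed penalty `lam`; it is not the paper's exact PCA+CRR algorithm.

```python
import numpy as np

rng = np.random.default_rng(5)
n, d, lam = 40, 10, 1.0
X = rng.normal(size=(n, d))
y = rng.integers(0, 2, size=n)
T = np.eye(2)[y]                         # one-hot class indicators (targets)

# Hat matrix of ridge regression: H = X (X'X + lam I)^{-1} X'.
H = X @ np.linalg.solve(X.T @ X + lam * np.eye(d), X.T)
Y_hat = H @ T                            # fitted targets on the full data

# Closed-form LOO residuals: (t_i - yhat_i) / (1 - H_ii), no refitting needed.
loo_resid = (T - Y_hat) / (1.0 - np.diag(H))[:, None]
loo_pred = (T - loo_resid).argmax(axis=1)
loo_error = np.mean(loo_pred != y)
print(loo_error)                         # LOO misclassification rate
```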

19.
Single-Parameter Regularization of Linear Discriminant Analysis for Face Recognition
When linear discriminant analysis (LDA) is applied to face recognition, the small sample size problem frequently arises: the number of available face training samples is usually far smaller than their dimensionality, so the within-class scatter matrix Sw becomes singular and the resulting eigenvalue problem is ill-posed. This paper uses mathematical tools to examine the essence of this phenomenon. In addition, a single-parameter regularization method is proposed to solve the small sample size problem: subject to the condition tr(S'w) = tr(Sw), an invertible matrix S'w is used to estimate the singular within-class scatter matrix Sw. After the face images are preprocessed with a wavelet transform for dimensionality reduction, the method is compared experimentally with conventional LDA. The experiments show that the method can substantially improve the recognition performance of LDA.
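One simple way to satisfy the trace condition in this abstract is to blend the singular Sw with a scaled identity, so that tr(S'w) = tr(Sw) holds exactly while S'w becomes invertible. The blending weight `alpha` below is a hypothetical single parameter for illustration; the paper's exact estimator may differ.

```python
import numpy as np

def regularize_trace_preserving(Sw, alpha=0.1):
    """Invertible estimate S'w of Sw with tr(S'w) == tr(Sw).

    S'w = (1-alpha) Sw + alpha (tr(Sw)/d) I, so the two trace terms
    sum back to tr(Sw) while every eigenvalue becomes strictly positive.
    """
    d = Sw.shape[0]
    return (1.0 - alpha) * Sw + alpha * (np.trace(Sw) / d) * np.eye(d)

rng = np.random.default_rng(6)
A = rng.normal(size=(5, 12))
Sw = A.T @ A                                        # 12x12, rank 5 -> singular
Sw_reg = regularize_trace_preserving(Sw)
print(np.isclose(np.trace(Sw_reg), np.trace(Sw)))   # trace preserved
print(np.linalg.matrix_rank(Sw_reg))                # 12: now invertible
```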

20.
In two very recently published rapid and brief communications, both from the same authors, an alternative formulation of the well-known Fisher criterion is presented in order to overcome the ‘small sample problem’. A theorem in the first of the two communications provides the basis for the equivalence of the two formulations. By providing a simple counterexample, we disprove the theorem. Subsequently, based on an illustrative example, we demonstrate that their criterion differs from the classical one, and we argue that the proposed criterion is not a suitable measure of discriminability.
