Similar articles (20 results found)
1.
It is well known that the applicability of both linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA) to high-dimensional pattern classification tasks such as face recognition (FR) often suffers from the so-called "small sample size" (SSS) problem, which arises when the number of available training samples is small compared to the dimensionality of the sample space. In this paper, we propose a new QDA-like method that effectively addresses the SSS problem using a regularization technique. Extensive experiments on the FERET database indicate that the proposed methodology outperforms traditional methods such as Eigenfaces, direct QDA and direct LDA in a number of SSS scenarios.
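A hedged illustration of the general idea only, not the paper's exact regularizer: regularizing each class covariance estimate (here via scikit-learn's reg_param) keeps QDA usable when the per-class sample counts are far below the feature dimensionality. The PCA step, the component count, the regularization strength and the stand-in data are assumptions of this sketch.

```python
# Sketch: shrinkage-regularized QDA on PCA-reduced face vectors (stand-in data).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 1024))   # placeholder for 60 face images of 32x32 pixels
y = np.repeat(np.arange(6), 10)   # 6 subjects, 10 images each

clf = make_pipeline(
    PCA(n_components=30),                          # tame the dimensionality first
    QuadraticDiscriminantAnalysis(reg_param=0.1),  # regularize each rank-deficient class covariance
)
clf.fit(X, y)
print(clf.score(X, y))            # training accuracy of the sketch model
```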

2.
Many pattern recognition applications involve high-dimensional data and the small sample size problem. Principal component analysis (PCA) is a commonly used dimension reduction technique, and linear discriminant analysis (LDA) is often employed for classification. PCA plus LDA is a well-known framework for discriminant analysis in high-dimensional spaces and singular cases. In this paper, we examine the theory of this framework and find that, even when there is no small sample size problem, PCA dimension reduction cannot guarantee the subsequent successful application of LDA. We therefore develop an improved discriminant analysis method by introducing an inverse Fisher criterion and adding a constraint to the PCA procedure so that the singularity phenomenon does not occur. Experimental results on face recognition suggest that this new approach works well and can be applied even when the number of training samples per class is one.
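For reference, a minimal sketch of the standard PCA-then-LDA baseline that the paper analyzes (not its improved inverse-Fisher variant); the data, shapes and component count are placeholders.

```python
# Sketch: the conventional PCA + LDA pipeline on placeholder face vectors.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 1024))   # placeholder for 80 face images
y = np.repeat(np.arange(8), 10)   # 8 subjects, 10 images each

# Keeping at most n_samples - n_classes principal components is the usual rule
# of thumb for making the within-class scatter nonsingular before LDA.
model = make_pipeline(PCA(n_components=60), LinearDiscriminantAnalysis())
model.fit(X, y)
print(model.transform(X).shape)   # LDA yields at most n_classes - 1 = 7 axes
```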

3.
This paper addresses the small sample size problem in linear discriminant analysis, which occurs in face recognition applications. Belhumeur et al. [IEEE Trans. Pattern Anal. Mach. Intell. 19 (7) (1997) 711-720] proposed the Fisherface method. We find that the Fisherface method may still fail because, after the PCA transform, the corresponding within-class covariance matrix can remain singular; this phenomenon is verified on the Yale face database. We therefore propose to use an inverse Fisher criterion. Our method works even when the number of training images per class is one. Experimental results suggest that this new approach performs well.

4.
An improved manifold learning method for face recognition, called enhanced semi-supervised local Fisher discriminant analysis (ESELF), is proposed. Motivated by the fact that being statistically uncorrelated and parameter-free are two desirable and promising characteristics for dimension reduction, a new difference-based optimization objective function incorporating unlabeled samples is designed. The proposed method preserves the manifold structure of labeled and unlabeled samples while separating labeled samples of different classes from each other. The semi-supervised method has an analytic, globally optimal solution that can be computed by eigendecomposition. Experiments on synthetic data and the AT&T, Yale and CMU PIE face databases are performed to evaluate the proposed algorithm, and the results and comparisons demonstrate its effectiveness.

5.
In this paper, we propose a novel bagging null space locality preserving discriminant analysis (bagNLPDA) method for facial feature extraction and recognition. The bagNLPDA method first projects all training samples into the range space of a locality-preserving total scatter matrix without losing any discriminative information. The projected training samples are then randomly sampled with bagging to generate a set of bootstrap replicates. Null space discriminant analysis is performed in each replicate, and the results are combined by majority voting. As a result, the proposed method aggregates a set of complementary null space locality preserving discriminant classifiers. Experiments on subsets of FERET and PIE demonstrate the effectiveness of bagNLPDA.
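A hedged sketch of just the bagging-and-majority-voting combination described above, using plain LDA as a stand-in for the null space locality preserving discriminant step; the replicate count and data are placeholders.

```python
# Sketch: bootstrap replicates of a discriminant classifier combined by majority vote.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))    # placeholder features (e.g., after a projection step)
y = np.repeat(np.arange(10), 10)
X_test = X[:5]

votes = []
for _ in range(15):                                   # 15 bootstrap replicates
    idx = rng.integers(0, len(X), size=len(X))        # sample with replacement
    clf = LinearDiscriminantAnalysis().fit(X[idx], y[idx])
    votes.append(clf.predict(X_test))

votes = np.stack(votes)                               # shape: (replicates, test samples)
majority = np.array([np.bincount(col).argmax() for col in votes.T])
print(majority)
```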

6.
A complete fuzzy discriminant analysis approach for face recognition
In this paper, we study the essence of the fuzzy linear discriminant analysis (F-LDA) algorithm and the fuzzy support vector machine (FSVM) classifier. As a kernel-based learning machine, FSVM is represented with a fuzzy membership function while achieving the same classification results as conventional pairwise classification, and it outperforms other learning machines especially when unclassifiable regions remain under those conventional classifiers. However, a serious drawback of FSVM is that its computational cost increases rapidly with the number of classes and the training sample size. To address this problem, we first propose an improved FSVM method that combines the advantages of FSVM and decision trees, called DT-FSVM. Furthermore, in the feature extraction stage, a reformative F-LDA algorithm based on fuzzy k-nearest neighbors (FKNN) is used to capture the distribution information of each original sample as fuzzy membership grades, which are incorporated into redefined scatter matrices. In particular, since outlier samples may adversely influence the classification result, we develop a novel F-LDA algorithm that uses a relaxed normalization condition in the definition of the fuzzy membership function, so the limitation caused by outlier samples is effectively alleviated. Finally, by making full use of fuzzy set theory, a complete F-LDA (CF-LDA) framework is developed by combining the reformative F-LDA (RF-LDA) feature extraction method and the DT-FSVM classifier. This hybrid fuzzy algorithm is applied to face recognition; extensive experiments on the ORL and NUST603 face databases demonstrate the effectiveness of the proposed algorithm.
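A hedged sketch of the FKNN-style membership initialization commonly used with fuzzy LDA variants (Keller-style 0.51/0.49 weighting); the exact membership definition and the relaxed normalization in the paper may differ, and the helper name and data below are illustrative only.

```python
# Sketch: fuzzy k-nearest-neighbour membership grades for each sample and class.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def fknn_memberships(X, y, k=5):
    """U[i, c]: fuzzy membership of sample i in class c (rows sum to 1)."""
    classes = np.unique(y)
    _, idx = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    idx = idx[:, 1:]                              # drop each sample's self-neighbour
    U = np.zeros((len(X), len(classes)))
    for i in range(len(X)):
        counts = np.array([(y[idx[i]] == c).sum() for c in classes])
        U[i] = 0.49 * counts / k                  # share contributed by the neighbours
        U[i, np.searchsorted(classes, y[i])] += 0.51   # extra weight for the true label
    return U

# The memberships can then weight the class means and scatter matrices of F-LDA.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 10))
y = np.repeat(np.arange(6), 10)
print(fknn_memberships(X, y)[0].round(2))
```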

7.
Face recognition based on a novel linear discriminant criterion
As an effective technique for feature extraction and pattern classification, Fisher linear discriminant (FLD) has been successfully applied in many fields. However, for tasks with very high-dimensional data such as face images, the conventional FLD technique encounters a fundamental difficulty caused by the singular within-class scatter matrix. To avoid this trouble, many improvements to the feature extraction aspect of FLD have been proposed; in contrast, studies on its pattern classification aspect are quite few. In this paper, we focus on improving the pattern classification aspect of FLD by presenting a novel linear discriminant criterion called maximum scatter difference (MSD). Theoretical analysis demonstrates that the MSD criterion is a generalization of the Fisher discriminant criterion and is the asymptotic form of the large margin linear projection criterion. The performance of the MSD classifier is tested on face recognition. Experiments on the ORL, Yale, FERET and AR databases show that the MSD classifier can compete with top-performing linear classifiers such as linear support vector machines, and is better than or equivalent to combinations of well-known facial feature extraction methods (Eigenfaces, Fisherfaces, orthogonal complementary space, null space, direct linear discriminant analysis) with the nearest neighbor classifier.
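A hedged sketch of a maximum-scatter-difference style projection: instead of the Fisher ratio, maximize w^T(Sb - c*Sw)w over unit vectors w, i.e. take the leading eigenvectors of Sb - c*Sw, so no inversion of the possibly singular within-class scatter is needed. The balance constant c, the data and the dimensionality are assumptions of this sketch, not the paper's exact classifier.

```python
# Sketch: projection directions maximizing the between/within scatter difference.
import numpy as np

def msd_projection(X, y, n_dims=5, c=1.0):
    classes = np.unique(y)
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sb = np.zeros((d, d)); Sw = np.zeros((d, d))
    for cl in classes:
        Xc = X[y == cl]
        diff = Xc.mean(axis=0) - mean
        Sb += len(Xc) * np.outer(diff, diff)                        # between-class scatter
        Sw += (Xc - Xc.mean(axis=0)).T @ (Xc - Xc.mean(axis=0))     # within-class scatter
    evals, evecs = np.linalg.eigh(Sb - c * Sw)                      # symmetric, no Sw inverse
    return evecs[:, np.argsort(evals)[::-1][:n_dims]]               # top eigenvectors

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 30))
y = np.repeat(np.arange(6), 10)
W = msd_projection(X, y)
print((X @ W).shape)              # projected features, one row per sample
```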

8.
This study presents a novel kernel discriminant transformation (KDT) algorithm for face recognition based on image sets. With each image set represented by a kernel subspace, we formulate a KDT matrix that maximizes the similarities of within-kernel subspaces and simultaneously minimizes those of between-kernel subspaces. Although the KDT matrix cannot be computed explicitly in a high-dimensional feature space, we propose an iterative kernel discriminant transformation algorithm that solves for the matrix implicitly. A further similarity measure, the canonical difference, is also introduced for matching pairs of kernel subspaces and is employed to simplify the formulation. The proposed face recognition system is demonstrated to outperform existing still-image-based as well as image-set-based face recognition methods on the Yale Face Database B, Labeled Faces in the Wild and a self-compiled database.

9.
Although the 2DLDA algorithm obtains high recognition accuracy, a vital unresolved problem is that it needs a huge feature matrix for the face recognition task. To overcome this problem, this paper presents an efficient approach to face image feature extraction, namely the (2D)²LDA method. Experimental results on the ORL and Yale databases show that the proposed method obtains good recognition accuracy with a much smaller number of coefficients.
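A hedged sketch of a bidirectional 2D-LDA projection in the spirit of (2D)²LDA, working directly on image matrices so that each feature matrix stays small; the scatter definitions follow the usual 2DLDA formulation, while the ridge term, projection sizes and stand-in data are assumptions of this sketch.

```python
# Sketch: column- and row-direction 2DLDA projections applied to image matrices.
import numpy as np
from scipy.linalg import eigh

def twodlda_direction(images, y, n_dirs):
    """Column-direction 2DLDA: images is (N, m, n); returns an (n, n_dirs) matrix."""
    mean = images.mean(axis=0)
    n = images.shape[2]
    Gb = np.zeros((n, n)); Gw = np.zeros((n, n))
    for cl in np.unique(y):
        Ac = images[y == cl]
        Mc = Ac.mean(axis=0)
        Gb += len(Ac) * (Mc - mean).T @ (Mc - mean)   # image between-class scatter
        for A in Ac:
            Gw += (A - Mc).T @ (A - Mc)               # image within-class scatter
    Gw += 1e-6 * np.eye(n)                            # small ridge keeps Gw positive definite
    evals, evecs = eigh(Gb, Gw)                       # generalized eigenproblem Gb v = l Gw v
    return evecs[:, np.argsort(evals)[::-1][:n_dirs]]

rng = np.random.default_rng(0)
images = rng.normal(size=(60, 28, 24))                # stand-in for 60 face images
y = np.repeat(np.arange(6), 10)
WR = twodlda_direction(images, y, 6)                            # right (column) projection
WL = twodlda_direction(images.transpose(0, 2, 1), y, 6)         # left (row) projection
features = np.einsum('pi,nij,jq->npq', WL.T, images, WR)        # W_L^T A W_R per image
print(features.shape)                                           # (60, 6, 6) feature matrices
```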

10.
In this paper, a novel statistical generative model for describing a face is presented and applied to the face authentication task. Classical generative models used so far in face recognition, such as Gaussian Mixture Models (GMMs) and Hidden Markov Models (HMMs), make strong assumptions on the observations derived from a face image: they usually assume that local observations are independent, which is obviously not the case in a face. The presented model therefore encodes relationships between salient facial features using a static Bayesian network. Since robustness against imprecisely located faces is of great concern in real-world scenarios, authentication results are reported using automatically localised faces. Experiments conducted on the XM2VTS and BANCA databases show that the proposed approach is suitable for this task, reaching state-of-the-art results. We compare our model to baseline appearance-based systems (Eigenfaces and Fisherfaces) and to classical generative models, namely GMM, HMM and pseudo-2D HMM.

11.
We propose a subspace distance measure to analyze the similarity between intrapersonal face subspaces, which characterize the variations between face images of the same individual. We call the conventional intrapersonal subspace the average intrapersonal subspace (AIS) because its image differences often come from a large number of persons, and we call an intrapersonal subspace a specific intrapersonal subspace (SIS) if the image differences come from just one person. We demonstrate that SIS varies from person to person and that most SISs are not similar to AIS. Based on these observations, we introduce maximum a posteriori (MAP) adaptation to the problem of SIS estimation and apply it to the Bayesian face recognition algorithm. Experimental results show that the adaptive Bayesian algorithm outperforms the non-adaptive Bayesian algorithm as well as the Eigenface and Fisherface methods when a small number of adaptation images are available.

12.
In this paper, we propose a new discriminant locality preserving projections method based on the maximum margin criterion (DLPP/MMC). DLPP/MMC seeks to maximize the difference, rather than the ratio, between the locality-preserving between-class scatter and the locality-preserving within-class scatter. DLPP/MMC is theoretically elegant and can derive its discriminant vectors from both the range space of the locality-preserving between-class scatter and the range space of the locality-preserving within-class scatter; it can also derive discriminant vectors from the null space of the locality-preserving within-class scatter when its parameter approaches +∞. Experiments on the ORL, Yale, FERET, and PIE face databases show the effectiveness of the proposed DLPP/MMC.
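For intuition, a hedged restatement of the ratio-versus-difference distinction in generic form; the locality-preserving scatters S_b^L and S_w^L are defined in the paper, while the trade-off constant μ and the orthonormality constraint below are assumptions of this sketch.

```latex
% Ratio (Fisher/DLPP-style) criterion versus the difference (MMC-style) criterion:
J_{\mathrm{ratio}}(W) = \operatorname{tr}\!\big( (W^{\top} S_w^{L} W)^{-1}\, W^{\top} S_b^{L} W \big),
\qquad
J_{\mathrm{diff}}(W) = \operatorname{tr}\!\big( W^{\top} (S_b^{L} - \mu\, S_w^{L}) W \big), \quad W^{\top} W = I .
```

The difference form is maximized by the leading eigenvectors of S_b^L − μ S_w^L, so no inversion of the possibly singular within-class scatter is required, which is what allows discriminant vectors to be drawn from its null space.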

13.
A single-parameter regularization method for linear discriminant analysis in face recognition
When linear discriminant analysis (LDA) is applied to face recognition, the small sample size problem often arises: the number of available training face samples is usually far smaller than their dimensionality, so the within-class scatter matrix Sw becomes singular and the resulting eigenvalue problem is ill-conditioned. This paper uses mathematical tools to examine the essence of this phenomenon. In addition, a single-parameter regularization method is proposed to solve the small sample size problem: subject to the condition tr(S'w) = tr(Sw), an invertible matrix S'w is used to estimate the singular within-class scatter matrix Sw. After the face images are preprocessed by wavelet-transform dimension reduction, the method is compared experimentally with traditional LDA. The experiments show that the method substantially improves the recognition performance of LDA.
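One trace-preserving, single-parameter regularizer consistent with the stated condition tr(S'w) = tr(Sw) is sketched below; the exact estimator used in the paper may differ, and the shrinkage weight alpha and the toy data are assumptions.

```python
# Sketch: shrink Sw toward a scaled identity without changing its trace.
import numpy as np

def regularize_sw(Sw, alpha=0.1):
    d = Sw.shape[0]
    # (1 - alpha) * tr(Sw) + alpha * (tr(Sw) / d) * d == tr(Sw), so the trace is preserved.
    return (1 - alpha) * Sw + alpha * (np.trace(Sw) / d) * np.eye(d)

# Example: a rank-deficient within-class scatter becomes invertible.
rng = np.random.default_rng(0)
H = rng.normal(size=(200, 20))           # 20 centered samples in 200 dimensions
Sw = H @ H.T                             # rank <= 20, hence singular (200 x 200)
Sw_reg = regularize_sw(Sw, alpha=0.05)
print(np.isclose(np.trace(Sw_reg), np.trace(Sw)), np.linalg.matrix_rank(Sw_reg))
```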

14.
The goal of face recognition is to distinguish persons via their facial images. Each person's images form a cluster, and a new image is recognized by assigning it to the correct cluster. Because the images are very high-dimensional, their dimension must be reduced. Linear discriminant analysis (LDA) has been shown to be effective at dimension reduction while preserving the cluster structure of the data. It is classically defined as an optimization problem involving covariance matrices that represent the scatter within and between clusters, and the requirement that one of these matrices be nonsingular restricts its application to datasets in which the data dimension does not exceed the sample size. For face recognition, however, the dimension typically exceeds the number of images in the database, resulting in what is referred to as the small sample size problem. Recently, the applicability of LDA has been extended by using the generalized singular value decomposition (GSVD) to circumvent the nonsingularity requirement, making LDA directly applicable to face recognition data. Our experiments confirm that LDA/GSVD solves the small sample size problem very effectively compared with other current methods.
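Not the GSVD route itself, but a hedged reminder that SVD-based LDA solvers (such as scikit-learn's solver="svd") likewise avoid forming and inverting the singular within-class scatter in the undersampled setting; the data shapes here are placeholders.

```python
# Sketch: LDA via an SVD-based solver on data with more dimensions than samples.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 500))    # 30 images in 500 dimensions: small sample size setting
y = np.repeat(np.arange(3), 10)   # 3 subjects, 10 images each

lda = LinearDiscriminantAnalysis(solver="svd")   # no explicit inverse of the within-class scatter
print(lda.fit(X, y).transform(X).shape)          # at most n_classes - 1 = 2 discriminant axes
```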

15.
We propose subspace distance measures to analyze the similarity between intrapersonal face subspaces, which characterize the variations between face images of the same individual. We call the conventional intrapersonal subspace the average intrapersonal subspace (AIS) because the image differences often come from a large number of persons. An intrapersonal subspace is referred to as a specific intrapersonal subspace (SIS) if the image differences are from just one person. We demonstrate that SIS varies significantly from person to person, and most SISs are not similar to AIS. Based on these observations, we introduce maximum a posteriori (MAP) adaptation to the problem of SIS estimation, and apply it to the Bayesian face recognition algorithm. Experimental results show that the adaptive Bayesian algorithm outperforms the non-adaptive Bayesian algorithm as well as the Eigenface and Fisherface methods if a small number of adaptation images are available.

16.
17.
In this paper, we present counterarguments against the direct LDA algorithm (D-LDA), which was previously claimed to be equivalent to linear discriminant analysis (LDA). We show from Bayesian decision theory that D-LDA is actually a special case of LDA that directly takes the linear space of the class means as the LDA solution while completely ignoring the pooled covariance estimate. Furthermore, we demonstrate that D-LDA is not equivalent to traditional subspace-based LDA in dealing with the small sample size problem. As a result, D-LDA may impose a significant performance limitation in general applications.

18.
Classification based on Fisher's linear discriminant analysis (FLDA) is challenging when the number of variables greatly exceeds the number of available samples. The original FLDA must be carefully modified, and at high dimensionality implementation issues such as reducing storage costs become crucially important. Methods for the high-dimension/small-sample-size problem are reviewed, and the one closest, in some sense, to the classical regular approach is chosen. The implementation of this method is improved with regard to computational cost, storage cost and numerical stability by combining a variety of known and new implementation strategies. Experiments demonstrate the superiority of the resulting algorithm over other methods with respect to both overall costs and classification rates.

19.
This paper proposes a new subspace method based on an image covariance obtained from windowed features of images. A windowed input feature consists of a number of pixels, and the dimension of the input space is determined by the number of windowed features. Each element of the image covariance matrix can be obtained from the inner product of two windowed features. The 2D-PCA and 2D-LDA methods are then obtained from principal component analysis and linear discriminant analysis, respectively, using the image covariance matrix. In the case of 2D-LDA, there is no need for PCA preprocessing, and the dimension of the subspace can be greater than the number of classes because the within-class and between-class image covariance matrices have full rank. Comparative experiments are performed using the FERET, CMU, and ORL databases of facial images. The experimental results show that the proposed 2D-LDA provides the best recognition rate among several subspace methods in all of the tests.

20.
Two DLDA feature extraction algorithms based on matrix factorization are proposed. By introducing two matrix analysis tools, QR decomposition and spectral factorization (SF), the scatter matrices are dimension-reduced under the DLDA discriminant criterion, yielding more effective and stable classification information for describing face image samples. Through an analysis of the two factorization processes, it is shown that, within traditional Fisher discriminant analysis, matrix factorization can likewise emulate the PCA step to reduce the dimensionality of the samples, thereby overcoming the small sample size problem. Experimental results on the ORL face database verify the effectiveness of the algorithm.
