Similar Documents
20 similar documents found (search time: 375 ms).
1.
Face recognition using LDA-based algorithms (cited by 21: 0 self-citations, 21 by others)
Low-dimensional feature representation with enhanced discriminatory power is of paramount importance to face recognition (FR) systems. Most traditional linear discriminant analysis (LDA)-based methods suffer from the disadvantage that their optimality criteria are not directly related to the classification ability of the obtained feature representation. Moreover, their classification accuracy is affected by the "small sample size" (SSS) problem which is often encountered in FR tasks. In this paper, we propose a new algorithm that deals with both of these shortcomings in an efficient and cost-effective manner. The proposed method is compared, in terms of classification accuracy, to other commonly used FR methods on two face databases. Results indicate that the performance of the proposed method is overall superior to those of traditional FR approaches, such as the eigenfaces, fisherfaces, and D-LDA methods.
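For orientation, the classical Fisher LDA projection that these papers build on can be sketched in a few lines of numpy. This is a generic illustration, not the specific algorithm of the abstract above; the `reg` ridge term and the toy two-class data are assumptions for the sketch.

```python
import numpy as np

def fisher_lda(X, y, n_components=1, reg=1e-6):
    """Project data onto the leading Fisher discriminant directions.

    Maximizes between-class over within-class scatter; `reg` is a small
    ridge term guarding against a singular within-class scatter matrix
    (the "small sample size" problem discussed in these abstracts).
    """
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))  # within-class scatter
    Sb = np.zeros((d, d))  # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean_all)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    # Solve the generalized eigenproblem Sb w = lambda Sw w
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(Sw + reg * np.eye(d), Sb))
    order = np.argsort(eigvals.real)[::-1]
    return eigvecs.real[:, order[:n_components]]

# Two well-separated Gaussian classes in 2-D
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0, 0], 0.5, (50, 2)), rng.normal([4, 4], 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
W = fisher_lda(X, y)
print(W.shape)  # (2, 1)
```

The criticism made in the abstract applies here: the eigen-criterion being maximized is not directly tied to classification error, which is what the proposed method addresses.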

2.
There are two fundamental problems with the Fisher linear discriminant analysis for face recognition. One is the singularity problem of the within-class scatter matrix due to small training sample size. The other is that it cannot efficiently describe complex nonlinear variations of face images because of its linear property. In this letter, a kernel scatter-difference-based discriminant analysis is proposed to overcome these two problems. We first use the nonlinear kernel trick to map the input data into an implicit feature space F. Then a scatter-difference-based discriminant rule is defined to analyze the data in F. The proposed method can not only produce nonlinear discriminant features but also avoid the singularity problem of the within-class scatter matrix. Extensive experiments show encouraging recognition performance of the new algorithm.
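The key point of a scatter-difference criterion is that it replaces the Fisher ratio with a difference, so no within-class scatter matrix is inverted and singularity is harmless. A minimal linear-space illustration (the abstract's method works in a kernel-induced space; the data, the trade-off constant `c = 1`, and the sample sizes here are assumptions):

```python
import numpy as np

# Scatter-difference criterion J(w) = w' (Sb - c*Sw) w: no inversion is
# needed, so a singular Sw (small sample size) causes no difficulty.
rng = np.random.default_rng(7)
X = np.vstack([rng.normal(-1, 0.3, (3, 10)),
               rng.normal(1, 0.3, (3, 10))])   # SSS setting: 6 samples in 10-D
y = np.array([0, 0, 0, 1, 1, 1])

m = X.mean(0)
Sb = np.zeros((10, 10))
Sw = np.zeros((10, 10))
for c in (0, 1):
    Xc = X[y == c]
    mc = Xc.mean(0)
    Sw += (Xc - mc).T @ (Xc - mc)
    Sb += len(Xc) * np.outer(mc - m, mc - m)

# Best direction: top eigenvector of Sb - c*Sw (c = 1 for illustration)
vals, vecs = np.linalg.eigh(Sb - 1.0 * Sw)
w = vecs[:, -1]
proj = X @ w
print(np.sign(proj[:3].mean()) != np.sign(proj[3:].mean()))  # classes on opposite sides
```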

3.
This paper addresses two problems in linear discriminant analysis (LDA) of face recognition. The first one is the problem of recognition of human faces under pose and illumination variations. It is well known that the distribution of face images with different pose, illumination, and face expression is complex and nonlinear. The traditional linear methods, such as LDA, will not give a satisfactory performance. The second problem is the small sample size (S3) problem. This problem occurs when the number of training samples is smaller than the dimensionality of the feature vector. In turn, the within-class scatter matrix will become singular. To overcome these limitations, this paper proposes a new kernel machine-based one-parameter regularized Fisher discriminant (K1PRFD) technique. K1PRFD is developed based on our previously developed one-parameter regularized discriminant analysis method and the well-known kernel approach. Therefore, K1PRFD consists of two parameters, namely the regularization parameter and the kernel parameter. This paper further proposes a new method to determine the optimal kernel parameter in the RBF kernel and the regularization parameter in the within-class scatter matrix simultaneously based on the conjugate gradient method. Three databases, namely FERET, Yale Group B, and CMU PIE, are selected for evaluation. The results are encouraging. Compared with the existing LDA-based methods, the proposed method gives superior results.
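The one-parameter regularization idea can be shown concretely: in the small-sample-size setting the within-class scatter matrix is rank-deficient, and adding a single ridge term restores invertibility. A minimal numpy sketch (the sizes and the value of `alpha` are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 5, 20          # fewer samples than dimensions: the S3 setting
X = rng.normal(size=(n, d))
Xc = X - X.mean(axis=0)
Sw = Xc.T @ Xc        # rank <= n-1 < d, hence singular

alpha = 1e-3          # the single regularization parameter
Sw_reg = Sw + alpha * np.eye(d)

print(np.linalg.matrix_rank(Sw))      # 4
print(np.linalg.cond(Sw_reg) < 1e12)  # True: now numerically invertible
```

The paper's contribution is then to pick `alpha` (together with the RBF kernel width) automatically via conjugate-gradient optimization, rather than by hand as above.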

4.
吕冰, 王士同. 《计算机应用》 (Journal of Computer Applications), 2006, 26(11): 2781-2783
A kernel-based K1PMDA algorithm for computing the optimal solution of multiple discriminant analysis is proposed and applied to face recognition. Linear face recognition suffers from two prominent problems: (1) under large variations in illumination, expression, and pose, the classification of face images is complex and nonlinear; (2) the small sample size problem, i.e., when the number of training samples is smaller than the dimensionality of the sample feature space, the within-class scatter matrix becomes singular. For the former, the kernel technique can be used to extract nonlinear features of the face image samples; for the latter, a perturbation algorithm that adds a perturbation parameter is adopted. Experiments on the ORL, Yale Group B, and UMIST face databases show that the algorithm is feasible and efficient.

5.
In this paper, we propose a nonlinear feature extraction method for regression problems to reduce the dimensionality of the input space. Previously, a feature extraction method LDAr, a regressional version of the linear discriminant analysis, was proposed. In this paper, LDAr is generalized to a nonlinear discriminant analysis by using the so-called kernel trick. The basic idea is to map the input space into a high-dimensional feature space where the variables are nonlinear transformations of input variables. Then we try to maximize the ratio of distances of samples with large differences in the target value and those with small differences in the target value in the feature space. It is well known that the distribution of face images, under a perceivable variation in translation, rotation, and scaling, is highly nonlinear and the face alignment problem is a complex regression problem. We have applied the proposed method to various regression problems including face alignment problems and achieved better performances than those of conventional linear feature extraction methods.
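The LDAr-style criterion described above can be made concrete: for a candidate direction, compare projected distances of sample pairs whose targets differ a lot against pairs whose targets differ little. This is a simplified linear-space sketch (the threshold, data, and the naive O(n²) pair loop are assumptions for illustration; the paper works in a kernel feature space):

```python
import numpy as np

def ldar_criterion(Z, t, w, threshold):
    """Ratio of projected-pair distances with large vs small target gaps.

    Z: features, t: regression targets, w: candidate direction.
    Pairs whose targets differ by more than `threshold` should end up far
    apart after projection; the remaining pairs should stay close.
    """
    p = Z @ w
    far, near = [], []
    n = len(t)
    for i in range(n):
        for j in range(i + 1, n):
            gap = abs(t[i] - t[j])
            d2 = (p[i] - p[j]) ** 2
            (far if gap > threshold else near).append(d2)
    return np.mean(far) / (np.mean(near) + 1e-12)

rng = np.random.default_rng(2)
Z = rng.normal(size=(30, 3))
t = Z[:, 0] + 0.1 * rng.normal(size=30)   # target depends mainly on feature 0
good = ldar_criterion(Z, t, np.array([1.0, 0.0, 0.0]), threshold=1.0)
bad = ldar_criterion(Z, t, np.array([0.0, 1.0, 0.0]), threshold=1.0)
print(good > bad)  # the target-relevant direction scores higher
```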

6.
This paper addresses the problem of automatically tuning multiple kernel parameters for the kernel-based linear discriminant analysis (LDA) method. The kernel approach has been proposed to solve face recognition problems under complex distribution by mapping the input space to a high-dimensional feature space. Some recognition algorithms such as the kernel principal components analysis, kernel Fisher discriminant, generalized discriminant analysis, and kernel direct LDA have been developed in the last five years. The experimental results show that the kernel-based method is a good and feasible approach to tackle the pose and illumination variations. One of the crucial factors in the kernel approach is the selection of kernel parameters, which highly affects the generalization capability and stability of the kernel-based learning methods. In view of this, we propose an eigenvalue-stability-bounded margin maximization (ESBMM) algorithm to automatically tune the multiple parameters of the Gaussian radial basis function kernel for the kernel subspace LDA (KSLDA) method, which is developed based on our previously developed subspace LDA method. The ESBMM algorithm improves the generalization capability of the kernel-based LDA method by maximizing the margin criterion while maintaining the eigenvalue stability of the kernel-based LDA method. An in-depth investigation of the generalization performance along the pose and illumination dimensions is performed using the YaleB and CMU PIE databases. The FERET database is also used for benchmark evaluation. Compared with the existing PCA-based and LDA-based methods, our proposed KSLDA method, with the ESBMM kernel parameter estimation algorithm, gives superior performance.
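"Multiple parameters of the Gaussian RBF kernel" here means one width per input dimension rather than a single shared width. A small sketch of such an anisotropic kernel, which is the kind of object an algorithm like ESBMM tunes (the `gammas` values and data points are arbitrary examples):

```python
import numpy as np

def multi_rbf_kernel(X, Y, gammas):
    """Gaussian RBF kernel with one width per input dimension:
    k(x, y) = exp(-sum_d gammas[d] * (x_d - y_d)^2).
    """
    diff2 = (X[:, None, :] - Y[None, :, :]) ** 2   # shape (n, m, d)
    return np.exp(-np.einsum('nmd,d->nm', diff2, gammas))

X = np.array([[0.0, 0.0], [1.0, 0.0]])
K = multi_rbf_kernel(X, X, np.array([0.5, 2.0]))
print(K[0, 0])  # 1.0 on the diagonal
print(K[0, 1])  # exp(-0.5), since the points differ only in dimension 0
```

Tuning the vector `gammas` instead of a single scalar is what makes the parameter-selection problem multi-dimensional, and hence worth automating.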

7.
This paper proposes a new nonlinear feature extraction method: hidden-space feature extraction based on the scatter-difference criterion. The main idea is to first map the original input space nonlinearly into a hidden space via a kernel function; then, in that hidden space, the difference between the between-class scatter and the within-class scatter is used as the discriminant criterion for feature extraction. Unlike existing kernel feature extraction methods, this method does not require the kernel function to satisfy Mercer's theorem, which widens the range of admissible kernel functions. More importantly, because the scatter difference is used as the discriminant criterion, the small sample size problem encountered by traditional Fisher linear discriminant analysis is fundamentally avoided. Experimental results on the ORL face database and the AR standard face database verify the effectiveness of the proposed method.

8.
Kernel discriminant analysis (KDA) is a widely used tool in the feature extraction community. However, for high-dimensional multi-class tasks such as face recognition, traditional KDA algorithms have the limitation that the Fisher criterion is nonoptimal with respect to classification rate. Moreover, they suffer from the small sample size problem. This paper presents a variant of KDA called kernel-based improved discriminant analysis (KIDA), which can effectively deal with the above two problems. In the proposed framework, the original samples are first projected into a feature space by an implicit nonlinear mapping. After reconstructing the between-class scatter matrix in the feature space by weighted schemes, the kernel method is used to obtain a modified Fisher criterion directly related to classification error. Finally, the simultaneous diagonalization technique is employed to find lower-dimensional nonlinear features with significant discriminant power. Experiments on the face recognition task show that the proposed method is superior to traditional KDA and LDA.

9.
A novel algorithm that can be used to boost the performance of face-verification methods that utilize Fisher's criterion is presented and evaluated. The algorithm is applied to similarity, or matching error, data and provides a general solution for overcoming the "small sample size" (SSS) problem, where the lack of sufficient training samples causes improper estimation of a linear separation hyperplane between the classes. Two independent phases constitute the proposed method. Initially, a set of weighted piecewise discriminant hyperplanes is used in order to provide a more accurate discriminant decision than the one produced by the traditional linear discriminant analysis (LDA) methodology. The expected classification ability of this method is investigated through a series of simulations. The second phase defines proper combinations for person-specific similarity scores and describes an outlier removal process that further enhances the classification ability. The proposed technique has been tested on the M2VTS and XM2VTS frontal face databases. Experimental results indicate that the proposed framework greatly improves the face-verification performance.

10.
Discriminant analysis is effective in extracting discriminative features and reducing dimensionality. In this paper, we propose an optimal subset-division based discrimination (OSDD) approach to enhance the classification performance of discriminant analysis technique. OSDD first divides the sample set into several subsets by using an improved stability criterion and K-means algorithm. We separately calculate the optimal discriminant vectors from each subset. Then we construct the projection transformation by combining the discriminant vectors derived from all subsets. Furthermore, we provide a nonlinear extension of OSDD, that is, the optimal subset-division based kernel discrimination (OSKD) approach. It employs the kernel K-means algorithm to divide the sample set in the kernel space and obtains the nonlinear projection transformation. The proposed approaches are applied to face and palmprint recognition, and are examined using the AR and FERET face databases and the PolyU palmprint database. The experimental results demonstrate that the proposed approaches outperform several related linear and nonlinear discriminant analysis methods.
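The first step of OSDD, dividing the sample set into subsets with K-means before running discriminant analysis on each, can be sketched as follows. This reproduces only plain K-means on toy data (the paper's improved stability criterion, the seed points, and the cluster layout are assumptions for the sketch):

```python
import numpy as np

def kmeans(X, init_centers, iters=20):
    """Plain K-means, used here only to divide the sample set into
    subsets prior to per-subset discriminant analysis."""
    centers = init_centers.copy()
    for _ in range(iters):
        dists = ((X[:, None] - centers[None]) ** 2).sum(-1)
        labels = np.argmin(dists, axis=1)
        centers = np.array([X[labels == j].mean(0) for j in range(len(centers))])
    return labels

# Three tight, well-separated groups of 30 points each
rng = np.random.default_rng(6)
X = np.vstack([rng.normal(c, 0.2, (30, 2)) for c in ([0, 0], [3, 0], [0, 3])])
subsets = kmeans(X, init_centers=X[[0, 30, 60]])  # one seed point per region
print(np.bincount(subsets))  # [30 30 30]: one subset per cluster
```

In OSDD, each of these subsets would then yield its own discriminant vectors, which are finally combined into one projection transformation.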

11.
陈斌, 张连海, 牛铜, 屈丹, 李弼程. 《自动化学报》 (Acta Automatica Sinica), 2014, 40(6): 1208-1215
A linear discriminant analysis (LDA) method based on the minimum classification error (MCE) criterion is proposed and applied to feature transformation in continuous speech recognition. The method uses nonparametric kernel density estimation to model the data distribution; given the estimated distribution, the discriminant transformation matrix is solved under the MCE criterion with a gradient-descent-based line search algorithm. The transformation matrix is then used to reduce the dimensionality of supervectors formed by concatenating the Mel filter-bank outputs of adjacent frames, yielding time-frequency features. Experimental results show that, compared with conventional MFCC features, the time-frequency features extracted by the proposed discriminant analysis improve recognition accuracy by 1.41%; compared with HLDA (heteroscedastic LDA) and the approximate pairwise empirical accuracy criterion (aPEAC) discriminant analysis, recognition accuracy improves by 1.14% and 0.83%, respectively.

12.
A fast kernel-based nonlinear discriminant analysis method (cited by 8: 0 self-citations, 8 by others)
A new nonlinear discriminant analysis method based on the "kernel trick" is proposed; it is equivalent, in the least-squares sense, to kernel-based Fisher discriminant analysis. The corresponding discriminant directions are obtained by solving a system of linear equations, so the computational cost is low and classification is extremely simple to implement. The method's greatest advantage is that, by screening the training data, the number of "significant" training patterns used to construct the discriminant vectors can be made far smaller than the total number of training patterns, which makes classification of the test set highly efficient; in addition, a dedicated optimization algorithm is designed to accelerate the selection of the "significant" training patterns. Experiments show that this nonlinear method not only has a clear efficiency advantage but also performs no worse than kernel-based Fisher discriminant analysis.
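The least-squares connection mentioned above means the kernel discriminant's expansion coefficients can come from one regularized linear system rather than an eigenproblem. A minimal sketch of that idea (the RBF width `gamma`, ridge `lam`, ±1 label coding, and toy data are all assumptions; the paper's pattern-screening step is not reproduced):

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    """Gaussian RBF Gram matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(-1, 0.3, (20, 2)), rng.normal(1, 0.3, (20, 2))])
y = np.array([-1.0] * 20 + [1.0] * 20)

# Least-squares form: solve (K + lam*I) alpha = y for the kernel expansion
K = rbf(X, X)
lam = 1e-2
alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)

scores = K @ alpha                       # discriminant scores on the training set
acc = np.mean(np.sign(scores) == y)
print(acc)
```

Solving one n×n linear system is typically cheaper and simpler than the generalized eigenproblem of standard kernel Fisher discriminant analysis, which is the efficiency argument the abstract makes.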

13.
Although Fisher-criterion-based linear discriminant analysis is recognized as one of the effective feature extraction methods and has been successfully applied to face recognition, the actual distribution of face images is highly complex owing to variations in illumination, expression, and pose, so extracting nonlinear discriminant features is quite necessary. To exploit nonlinear discriminant features for face recognition, a kernel-based subspace discriminant analysis method is proposed. The method first maps the original samples implicitly into a high-dimensional (possibly infinite-dimensional) feature space via the kernel function technique; then, in that high-dimensional feature space, reproducing-kernel theory is used to establish two equivalent models based on the generalized Fisher criterion; finally, the optimal discriminant vectors for face recognition are obtained via the orthogonal complement space method. Recognition experiments on the ORL and NUST603 face databases yielded recognition rates of 94% and 99.58%, respectively, showing that the method is comparable to kernel combination methods and clearly outperforms KPCA and Kernel Fisherfaces.

14.
This paper examines the theory of kernel Fisher discriminant analysis (KFD) in a Hilbert space and develops a two-phase KFD framework, i.e., kernel principal component analysis (KPCA) plus Fisher linear discriminant analysis (LDA). This framework provides novel insights into the nature of KFD. Based on this framework, the authors propose a complete kernel Fisher discriminant analysis (CKFD) algorithm. CKFD can be used to carry out discriminant analysis in "double discriminant subspaces." The fact that it can make full use of two kinds of discriminant information, regular and irregular, makes CKFD a more powerful discriminator. The proposed algorithm was tested and evaluated using the FERET face database and the CENPARMI handwritten numeral database. The experimental results show that CKFD outperforms other KFD algorithms.
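The two-phase "KPCA plus LDA" framework can be sketched end to end: first reduce to a finite KPCA subspace, then run ordinary Fisher LDA there. This is a bare-bones illustration on toy data, not the CKFD algorithm itself (the RBF width, the number of retained components, and the decision threshold are assumptions; the regular/irregular double-subspace analysis is not reproduced):

```python
import numpy as np

def rbf(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(-1, 0.4, (25, 2)), rng.normal(1, 0.4, (25, 2))])
y = np.array([0] * 25 + [1] * 25)

# Phase 1: kernel PCA -- center the Gram matrix, keep the top components
K = rbf(X, X)
n = len(X)
H = np.eye(n) - np.ones((n, n)) / n
Kc = H @ K @ H
vals, vecs = np.linalg.eigh(Kc)
top = vals.argsort()[::-1][:5]
Z = Kc @ (vecs[:, top] / np.sqrt(np.abs(vals[top])))   # KPCA features

# Phase 2: ordinary Fisher LDA in the 5-D KPCA subspace
m0, m1 = Z[y == 0].mean(0), Z[y == 1].mean(0)
Sw = sum((Z[y == c] - m).T @ (Z[y == c] - m) for c, m in [(0, m0), (1, m1)])
w = np.linalg.solve(Sw + 1e-6 * np.eye(5), m1 - m0)
proj = Z @ w
acc = np.mean((proj > proj.mean()) == (y == 1))
print(acc)
```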

15.
In this paper, we propose a new kernel discriminant analysis called kernel relevance weighted discriminant analysis (KRWDA) which has several interesting characteristics. First, it can effectively deal with the small sample size problem by using a QR decomposition on the scatter matrices. Second, by incorporating a weighting function into the discriminant criterion, it overcomes the overemphasis on well-separated classes and hence can work under more realistic situations. Finally, using kernel theory, it handles nonlinearity efficiently. In order to improve the performance of the proposed algorithm, we introduce two novel kernel functions and compare them with some kernels commonly used in the face recognition field. We have performed multiple face recognition experiments to compare KRWDA with other dimensionality reduction methods, showing that KRWDA consistently gives the best results.

16.
Extracting optimal discriminant features is an important step in face recognition. For high-dimensional face image samples with small sample sizes, the existing methods for extracting nonlinear discriminant features each have their own problems, so a new method for computing kernel-based Fisher optimal nonlinear discriminant features is proposed. The method first uses the between-class and within-class scatter matrices in the feature space as the Fisher criterion to obtain the optimal nonlinear discriminant features; then, to address the ill-posedness of this formulation, the optimal nonlinear discriminant vectors are further solved in the null space of the within-class scatter matrix. Experiments on the ORL face database show that the nonlinear optimal discriminant features extracted by the new method clearly outperform both the linear features of Fisher linear discriminant analysis (FLDA) and the nonlinear features of generalized discriminant analysis (GDA).

17.
Principal component analysis (PCA) and linear discriminant analysis (LDA) are widely used in face recognition, but the high computational complexity of PCA prevents face recognition from meeting real-time requirements, while LDA suffers from the "small sample size" and "edge class" problems, which reduce recognition accuracy. To address these issues, a method fusing two-dimensional PCA (2DPCA) with an improved LDA is proposed. 2DPCA extracts richer features than one-dimensional PCA while reducing computational complexity. The improved LDA redefines the between-class scatter matrix and the Fisher criterion, overcoming the problems of traditional LDA while retaining the most discriminative information and improving the recognition rate. Experimental results show that the algorithm achieves a higher recognition rate than PCA and LDA and is well suited to face recognition tasks.

18.
Discriminative Common Vector Method With Kernels (cited by 3: 0 self-citations, 3 by others)
In some pattern recognition tasks, the dimension of the sample space is larger than the number of samples in the training set. This is known as the "small sample size problem". Linear discriminant analysis (LDA) techniques cannot be applied directly to the small sample size case. The small sample size problem is also encountered when kernel approaches are used for recognition. In this paper, we attempt to answer the question of "How should one choose the optimal projection vectors for feature extraction in the small sample size case?" Based on our findings, we propose a new method called the kernel discriminative common vector method. In this method, we first nonlinearly map the original input space to an implicit higher-dimensional feature space, in which the data are hoped to be linearly separable. Then, the optimal projection vectors are computed in this transformed space. The proposed method yields an optimal solution for maximizing a modified Fisher's linear discriminant criterion, discussed in the paper. Thus, under certain conditions, a 100% recognition rate is guaranteed for the training set samples. Experiments on test data also show that, in many situations, the generalization performance of the proposed method compares favorably with other kernel approaches.

19.
It is well-known that the applicability of both linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA) to high-dimensional pattern classification tasks such as face recognition (FR) often suffers from the so-called "small sample size" (SSS) problem arising from the small number of available training samples compared to the dimensionality of the sample space. In this paper, we propose a new QDA like method that effectively addresses the SSS problem using a regularization technique. Extensive experimentation performed on the FERET database indicates that the proposed methodology outperforms traditional methods such as Eigenfaces, direct QDA and direct LDA in a number of SSS setting scenarios.
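In QDA-style methods, the SSS problem shows up as singular per-class covariance estimates; a common regularization is to shrink each covariance toward a scaled identity. A generic sketch of that idea (the shrinkage weight `gamma` and the sample sizes are illustrative, not the paper's specific regularizer):

```python
import numpy as np

def regularized_cov(Xc, gamma=0.1):
    """Shrink a class covariance toward a scaled identity so it stays
    invertible when samples are scarce; `gamma` in [0, 1] controls the
    amount of shrinkage."""
    n, d = Xc.shape
    S = np.cov(Xc, rowvar=False)
    return (1 - gamma) * S + gamma * (np.trace(S) / d) * np.eye(d)

rng = np.random.default_rng(4)
Xc = rng.normal(size=(6, 10))          # 6 samples in 10 dimensions
S = np.cov(Xc, rowvar=False)
S_reg = regularized_cov(Xc)
print(np.linalg.matrix_rank(S))        # 5: the raw estimate is singular
print(np.linalg.matrix_rank(S_reg))    # 10: the shrunk estimate is full rank
```

With full-rank class covariances, the quadratic discriminant function (which needs each inverse covariance) becomes well-defined even in the SSS setting.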

20.
It is generally believed that quadratic discriminant analysis (QDA) can better fit the data in practical pattern recognition applications compared to linear discriminant analysis (LDA) method. This is due to the fact that QDA relaxes the assumption made by LDA-based methods that the covariance matrix for each class is identical. However, it still assumes that the class conditional distribution is Gaussian which is usually not the case in many real-world applications. In this paper, a novel kernel-based QDA method is proposed to further relax the Gaussian assumption by using the kernel machine technique. The proposed method solves the complex pattern recognition problem by combining the QDA solution and the kernel machine technique, and at the same time, tackles the so-called small sample size problem through a regularized estimation of the covariance matrix. Extensive experimental results indicate that the proposed method is a more sophisticated solution outperforming many traditional kernel-based learning algorithms.
