Similar Documents
A total of 20 similar documents were found.
1.
This paper addresses the problem of face recognition using independent component analysis (ICA). More specifically, we address two issues in face representation using ICA. First, because the independent components (ICs) are independent but not orthogonal, images outside the training set cannot be projected onto these basis functions directly. We propose a least-squares solution method using the Householder transformation to find a new representation. Second, we demonstrate that not all ICs are useful for recognition; to this end, we design and develop an IC selection algorithm to find a subset of ICs for recognition. Three publicly available databases, namely MIT AI Laboratory, Yale University, and Olivetti Research Laboratory, are used to evaluate the performance, and the results are encouraging.
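A minimal Python sketch of the projection idea, assuming a learned IC basis stored column-wise in a matrix A; the data here are random placeholders, and numpy's QR factorization (which uses Householder reflections) stands in for the paper's Householder-based least-squares solver.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 1024, 20
A = rng.standard_normal((d, k))      # stand-in for the learned IC basis (columns = ICs)
x = rng.standard_normal(d)           # stand-in for a probe image, flattened

# The ICs are not orthogonal, so A.T @ x is not a valid projection.  Solve the
# least-squares problem min_c ||A c - x|| instead; a Householder-based QR
# factorization gives the least-squares coefficients directly.
Q, R = np.linalg.qr(A)
c = np.linalg.solve(R, Q.T @ x)      # representation coefficients of x in the IC basis
```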

2.
Face recognition with one training image per person
At present there are many methods that deal well with frontal-view face recognition. However, most of them fail when only one training image per person is available. In this paper, an extension of the eigenface technique, projection-combined principal component analysis ((PC)2A), is proposed. (PC)2A combines the original face image with its horizontal and vertical projections and then performs principal component analysis on the enriched version of the image. It requires less computational cost than the standard eigenface technique, and experimental results show that, on a gray-level frontal-view face database where each person has only one training image, (PC)2A achieves 3–5% higher accuracy than the standard eigenface technique while using 10–15% fewer eigenfaces.
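A minimal sketch of the enrichment-then-PCA idea, under the assumption that the projection map is formed from the outer product of the normalized row and column profiles; the exact combination rule and the blending weight in the paper may differ.

```python
import numpy as np
from sklearn.decomposition import PCA

def enrich(img, alpha=0.25):
    h = img.mean(axis=1, keepdims=True)          # horizontal (row) profile, shape (H, 1)
    v = img.mean(axis=0, keepdims=True)          # vertical (column) profile, shape (1, W)
    proj_map = h @ v / max(img.mean(), 1e-8)     # projection map, same shape as img
    return (img + alpha * proj_map) / (1.0 + alpha)

rng = np.random.default_rng(0)
faces = rng.random((40, 32, 32))                 # placeholder: one training image per person
X = np.stack([enrich(f).ravel() for f in faces])
eigenfaces = PCA(n_components=15).fit(X)         # eigenfaces of the enriched images
```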

3.
The well-known eigenface method uses an eigenface set obtained from principal component analysis. However, a single eigenface set is not enough to represent complicated face images with large variations in pose and/or illumination. To overcome this weakness, we propose a second-order mixture-of-eigenfaces method that combines the second-order eigenface method (ISO MPG m5750, Noordwijkerhout, March 2000) and the mixture-of-eigenfaces method, i.e. a Gaussian mixture model (Proceedings IJCNN 2001). In this method, we use a pair of mixtures of multiple eigenface sets: one is a mixture of multiple approximate eigenface sets for face images, and the other is a mixture of multiple residual eigenface sets for residual face images. Each mixture of eigenface sets is obtained by expectation-maximization learning. Based on these two mixtures, each face image is represented by a pair of feature vectors obtained by projecting the face image onto a selected approximate eigenface set and then projecting the residual face image onto a selected residual eigenface set. Recognition is performed using the distance in feature space between the input image and the template image stored in the face database. Simulation results show that the proposed second-order mixture-of-eigenfaces method is best for face images with illumination variations and the mixture-of-eigenfaces method is best for face images with pose variations, in terms of the average normalized modified retrieval rank and the false identification rate.

4.
The paper considers partial least squares (PLS) as a new dimension reduction technique for the feature vector to overcome the small-sample-size problem in face recognition. Principal component analysis (PCA), a conventional dimension reduction method, selects the components with maximum variability, irrespective of class information, so PCA does not necessarily extract features that are important for discriminating between classes. PLS, on the other hand, constructs components so that their correlation with the class variable is maximized; PLS components are therefore more predictive than PCA components for classification. Experimental results on the Manchester and ORL databases show that PLS is to be preferred over PCA when classification is the goal and dimension reduction is needed.
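A minimal sketch of using PLS as a supervised dimension reducer with scikit-learn; the data, label layout, and number of components are illustrative placeholders, not the paper's setup.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 1024))              # 60 flattened face images (placeholder)
y = np.repeat(np.arange(10), 6)                  # 10 subjects, 6 images each

Y = np.eye(10)[y]                                # one-hot class indicator matrix
pls = PLSRegression(n_components=20).fit(X, Y)   # components maximize covariance with the class indicators
X_reduced = pls.transform(X)                     # low-dimensional, class-predictive features
```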

5.
In this study, we are concerned with face recognition using the fuzzy fisherface approach and its fuzzy-set-based augmentation. The well-known fisherface method is relatively insensitive to substantial variations in lighting direction, face pose, and facial expression; this is accomplished by using both principal component analysis and Fisher's linear discriminant analysis. Most face recognition methods, including the fisherface approach, assume the same level of typicality (relevance) of each face to its corresponding class (category). We propose instead a graded class assignment, treated as a membership grade, in the expectation that such discrimination helps improve classification results. More specifically, when operating on the feature vectors resulting from the PCA transformation, we perform a fuzzy K-nearest-neighbor class assignment that produces the corresponding degrees of class membership. Comprehensive experiments on the ORL, Yale, and CNU (Chungbuk National University) face databases show improved classification rates and reduced sensitivity to variations between face images caused by changes in illumination and viewing direction. The performance is compared with other commonly used methods, such as eigenface and fisherface.
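A minimal sketch of the fuzzy membership step, assuming the common Keller-style fuzzy k-NN initialization that fuzzy fisherface variants typically use; the constants 0.51/0.49, k, and all data are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def fuzzy_memberships(X_pca, y, k=5):
    n, n_classes = len(y), int(y.max()) + 1
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_pca)
    _, idx = nn.kneighbors(X_pca)                 # idx[:, 0] is the sample itself
    U = np.zeros((n, n_classes))
    for i in range(n):
        votes = np.bincount(y[idx[i, 1:]], minlength=n_classes) / k
        U[i] = 0.49 * votes                       # share contributed by the neighbor votes
        U[i, y[i]] += 0.51                        # extra weight for the labeled class
    return U

rng = np.random.default_rng(0)
X_pca = rng.standard_normal((60, 30))             # placeholder PCA-reduced training features
y = np.repeat(np.arange(10), 6)                   # integer identity labels
U = fuzzy_memberships(X_pca, y)                   # each row is a graded membership vector summing to 1
# U replaces the hard 0/1 class indicators when the fuzzy within/between-class
# scatter matrices of Fisher's discriminant are accumulated.
```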

6.
Feature extraction is among the most important problems in face recognition systems. In this paper, we propose an enhanced kernel discriminant analysis (KDA) algorithm called kernel fractional-step discriminant analysis (KFDA) for nonlinear feature extraction and dimensionality reduction. Not only can this new algorithm, like other kernel methods, deal with the nonlinearity required for many face recognition tasks, but it can also outperform traditional KDA algorithms in resisting the adverse effects of outlier classes. Moreover, to further strengthen the overall performance of KDA algorithms for face recognition, we propose two new kernel functions: the cosine fractional-power polynomial kernel and the non-normal Gaussian RBF kernel. We perform extensive comparative studies on the YaleB and FERET face databases. Experimental results show that our KFDA algorithm outperforms traditional kernel principal component analysis (KPCA) and KDA algorithms, and that further improvement is obtained when the two new kernel functions are used.

7.
In this paper, a novel subspace method called diagonal principal component analysis (DiaPCA) is proposed for face recognition. In contrast to standard PCA, DiaPCA directly seeks the optimal projective vectors from diagonal face images without an image-to-vector transformation, while in contrast to 2DPCA, DiaPCA preserves the correlations between variations of the rows and those of the columns of the images. Experiments show that DiaPCA is much more accurate than both PCA and 2DPCA, and that the accuracy can be further improved by combining DiaPCA with 2DPCA.
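A minimal sketch of constructing a diagonal face image, under the assumption that each row is cyclically shifted by its row index; the paper's exact construction rule (in particular for non-square images) may differ.

```python
import numpy as np

def diagonal_image(img):
    rows, cols = img.shape
    diag = np.empty_like(img)
    for i in range(rows):
        diag[i] = np.roll(img[i], -i)            # shift row i left (cyclically) by i columns
    return diag

# DiaPCA then builds the 2DPCA-style image covariance from these diagonal images
# instead of the originals, so that row and column variations stay coupled.
rng = np.random.default_rng(0)
face = rng.random((32, 32))                      # placeholder face image
diag_face = diagonal_image(face)
```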

8.
The paper is concerned with face recognition using the embedded hidden Markov model (EHMM) with second-order block-specific observations. The proposed method partitions a face image into a 2-D lattice of blocks. Each block is represented by a second-order block-specific observation that consists of a combination of first- and second-order feature vectors. The first-order (or second-order) feature vector is obtained by projecting the original (or residual) block image onto the first (or second) basis vector, which is obtained block-specifically by applying PCA to a set of original (or residual) block images. The sequence of feature vectors obtained from the blocks, scanned top-to-bottom and left-to-right, is used as the observation sequence to train the EHMM. The EHMM models the face image hierarchically: several super states model the vertical facial features such as the forehead, eyes, nose, mouth, and chin, and several states within each super state model the localized features of that facial region. Recognition is performed by identifying the person whose model yields the highest observation probability. Experimental results show that the proposed method outperforms many existing methods, such as the second-order eigenface method, the EHMM with DCT observations, and the second-order eigenface method using a confidence factor, in terms of the average normalized modified retrieval rank and the false identification rate.

9.
Face recognition is a difficult research topic in pattern recognition with both theoretical significance and practical value. Building on the classical eigenface method, this paper proposes an improved eigenface approach: the face image is divided into upper, middle, and lower parts, the eigenface method is applied to each part separately, different weights are assigned to the per-part distances during recognition, and the face image with the smallest combined distance is selected. Comparative experiments against the standard eigenface method demonstrate the feasibility of the approach and its good robustness to distortion.
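A minimal sketch of the weighted distance fusion described above, assuming each face part has already been projected by its own eigenface model; the part weights and distances are illustrative placeholders, not the paper's values.

```python
import numpy as np

weights = np.array([0.3, 0.4, 0.3])              # illustrative weights: upper, middle, lower parts

def combined_distance(d_upper, d_middle, d_lower):
    parts = np.stack([d_upper, d_middle, d_lower])   # shape (3, n_gallery)
    return weights @ parts                            # weighted distance to each gallery face

# Placeholder per-part distances from one probe to a gallery of 40 faces:
rng = np.random.default_rng(0)
du, dm, dl = rng.random((3, 40))
identity = int(np.argmin(combined_distance(du, dm, dl)))   # index of the best match
```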

10.
NNSRM is an implementation of the structural risk minimization (SRM) principle using the nearest neighbor (NN) rule, and linear discriminant analysis (LDA) is a dimension-reducing method commonly used in classification. This paper combines the two methods for face recognition. We first project the face images into a PCA subspace, then project the results into a much lower-dimensional LDA subspace, and finally use an NNSRM classifier to recognize them in the LDA subspace. Experimental results demonstrate that the combined method achieves better performance than NN by selecting different distance measures, and performance comparable to SVM at a lower computational cost.
Jiaxin Wang (Corresponding author)

Danian Zheng received his Bachelor degree in Computer Science and Technology in 2002 from Tsinghua University, Beijing, China, and his Master and Doctoral degrees in Computer Science and Technology in 2006 from Tsinghua University. He is currently a researcher at Fujitsu R&D Center Co. Ltd, Beijing, China. His research interests are mainly in the areas of support vector machines, kernel methods, and their applications.

Meng Na received her Bachelor degree in Computer Science and Technology in 2003 from Northeastern University, China. Since 2003 she has been pursuing the Master and Doctoral degrees at the Department of Computer Science and Technology, Tsinghua University. Her research interests are in the areas of image processing, pattern recognition, and virtual humans.

Jiaxin Wang received his Bachelor degree in Automatic Control in 1965 from Beijing University of Aeronautics and Astronautics, his Master degree in Computer Science and Technology in 1981 from Tsinghua University, Beijing, China, and his Doctoral degree in 1996 from the Engineering Faculty of Vrije Universiteit Brussel, Belgium. He is currently a professor in the Department of Computer Science and Technology, Tsinghua University. His research interests are in the areas of artificial intelligence, intelligent control and robotics, machine learning, pattern recognition, image processing, and virtual reality.
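A minimal scikit-learn sketch of this entry's PCA → LDA → nearest-neighbor chain; plain 1-NN stands in for the NNSRM classifier, and the data and dimensionalities are placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.standard_normal((80, 1024))              # placeholder flattened face images
y = np.repeat(np.arange(20), 4)                  # 20 identities, 4 images each

model = make_pipeline(
    PCA(n_components=50),                        # first reduce to a PCA subspace
    LinearDiscriminantAnalysis(n_components=19), # then to a (c-1)-dimensional LDA subspace
    KNeighborsClassifier(n_neighbors=1),         # classify by the nearest neighbor there
).fit(X, y)
predictions = model.predict(X)
```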

11.
Face recognition using complex wavelets and independent component analysis
Chai Zhi, Liu Zhengguang. Journal of Computer Applications, 2010, 30(7): 1863-1866
A new face recognition method is proposed that combines the dual-tree complex wavelet transform (DT-CWT) with independent component analysis (ICA). The method first applies DT-CWT to extract feature vectors from the image, then reduces their dimensionality with principal component analysis (PCA), applies ICA on this basis to extract statistically independent features, and finally classifies the feature vectors with a correlation-coefficient-based classifier. DT-CWT offers directional and scale selectivity and effectively preserves the frequency-domain information of the image, so the features obtained by combining it with ICA have good discriminative power. Experiments on the ORL and AR face databases demonstrate the effectiveness of the method.

12.
To address the complexity and nonlinearity of chemical process monitoring data, this paper applies a new dimensionality reduction algorithm, kernel entropy component analysis (KECA), to chemical process monitoring. Compared with other multivariate statistical methods, KECA minimizes the information loss during dimensionality reduction, yielding a more reliable statistical model and a higher fault detection rate. Like kernel principal component analysis (KPCA), KECA maps the data into a high-dimensional space and performs component analysis there; the difference is that KECA selects the components that retain the most information, so less information is lost after reduction. The monitoring performance is tested on data from a lubricating-oil heavy-fraction process of a petrochemical plant: the fault detection rate of KECA is 98.2%, higher than that of KPCA (69.706%). The results show that KECA outperforms KPCA for chemical process monitoring.
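A minimal sketch of the KECA selection rule, assuming the standard criterion in which kernel eigen-axes are ranked by their contribution to the Rényi entropy estimate rather than by eigenvalue; the process data, kernel width, and number of retained components are placeholders.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 12))               # placeholder process data: 200 samples, 12 variables

K = rbf_kernel(X, gamma=0.1)                     # kernel (Gram) matrix
eigvals, eigvecs = np.linalg.eigh(K)             # eigenpairs in ascending order
# Contribution of each axis to the Renyi entropy estimate: lambda_i * (1^T e_i)^2
entropy = np.clip(eigvals, 0.0, None) * eigvecs.sum(axis=0) ** 2
keep = np.argsort(entropy)[::-1][:3]             # entropy-ranked axes, not eigenvalue-ranked

# Training projections onto the kept axes (standard kernel-PCA-style scores):
scores = eigvecs[:, keep] * np.sqrt(np.clip(eigvals[keep], 0.0, None))
```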

13.
A kernel principal component analysis method based on tensor algebra is proposed for feature extraction. The method avoids the excessive computational cost caused by high dimensionality and makes reasonable use of the class labels of the training samples. The algorithm first applies kernel PCA to each class of targets to form its own feature space, then maps all the features into a high-dimensional linear space via the tensor product, and finally performs linear PCA directly in that space to construct a suitable feature space. The resulting features reflect the characteristics of each class while requiring far less computation than applying kernel PCA directly. Object recognition experiments show that, compared with constructing the feature space directly with kernel PCA, the method markedly reduces computation and storage requirements while maintaining recognition performance.

14.
Liu Song, Luo Min, Zhang Guoping. Journal of Computer Applications, 2012, 32(5): 1404-1406
To make face recognition more practical, a new method is proposed that combines the mirror symmetry of faces with kernel principal component analysis (KPCA). The face image is first compressed with a wavelet transform to obtain the low-frequency subband; a mirror transform then yields the even-symmetric and odd-symmetric mirror images; KPCA is applied to the even and odd images separately to extract even and odd features, which are fused with a weighting factor; finally, a nearest-neighbor classifier is used for classification. Experiments on the ORL face database show that the algorithm enlarges the effective sample size, partially overcomes the adverse effects of illumination and pose, and improves the face recognition rate.
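A minimal sketch of the mirror-symmetry decomposition into even- and odd-symmetric images; the wavelet compression, KPCA feature extraction, weighting factor, and classifier are omitted or assumed to follow the paper, and the input image is a placeholder.

```python
import numpy as np

def mirror_parts(img):
    mirrored = img[:, ::-1]                      # horizontal mirror image
    even = (img + mirrored) / 2.0                # even-symmetric component
    odd = (img - mirrored) / 2.0                 # odd-symmetric component
    return even, odd

rng = np.random.default_rng(0)
face_lowfreq = rng.random((16, 16))              # placeholder low-frequency subimage
even, odd = mirror_parts(face_lowfreq)
# KPCA is applied to the even and odd parts separately; the two feature vectors
# are then fused, e.g. feat = np.concatenate([w * f_even, (1 - w) * f_odd]).
```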

15.
Inspired by the conviction that the successful model employed for face recognition [M. Turk, A. Pentland, Eigenfaces for recognition, J. Cogn. Neurosci. 3(1) (1991) 71-86] should be extendable to object recognition [H. Murase, S.K. Nayar, Visual learning and recognition of 3-D objects from appearance, International J. Comput. Vis. 14(1) (1995) 5-24], this paper explores a technique called two-dimensional principal component analysis (2D-PCA) [J. Yang et al., Two-dimensional PCA: a new approach to appearance-based face representation and recognition, IEEE Trans. Patt. Anal. Mach. Intell. 26(1) (2004) 131-137] for 3D object representation and recognition. 2D-PCA is based on 2D image matrices rather than 1D vectors, so the image matrix need not be transformed into a vector prior to feature extraction. The image covariance matrix is computed directly from the original image matrices, and its eigenvectors are derived for feature extraction. The experimental results indicate that 2D-PCA is computationally more efficient than conventional PCA (1D-PCA) [H. Murase, S.K. Nayar, Visual learning and recognition of 3-D objects from appearance, International J. Comput. Vis. 14(1) (1995) 5-24]. It is also revealed through experimentation that the proposed method is more robust to noise and occlusion.
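A minimal sketch of 2D-PCA feature extraction directly on image matrices; the image set and the number of retained projection axes are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
images = rng.standard_normal((50, 32, 32))       # placeholder: 50 training views, 32x32 each

mean_img = images.mean(axis=0)
centered = images - mean_img
# Image covariance matrix G = (1/n) * sum_i (A_i - mean)^T (A_i - mean), computed
# without flattening the images into vectors.
G = np.einsum('nij,nik->jk', centered, centered) / len(images)

eigvals, eigvecs = np.linalg.eigh(G)
W = eigvecs[:, ::-1][:, :5]                      # top 5 projection axes (columns)
features = centered @ W                          # each image becomes a 32x5 feature matrix
```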

16.
Block-wise 2D kernel PCA/LDA for face recognition
Direct extension of matrix-based (2D) linear subspace algorithms to a kernel-induced feature space is computationally intractable and also fails to exploit the local characteristics of the input data. In this letter, we develop a 2D generalized framework which integrates the concept of kernel machines with 2D principal component analysis (PCA) and 2D linear discriminant analysis (LDA). To remedy these drawbacks, we propose a block-wise approach based on the assumption that the data are multi-modally distributed in so-called block manifolds. The proposed methods, namely block-wise 2D kernel PCA (B2D-KPCA) and block-wise 2D generalized discriminant analysis (B2D-GDA), attempt to find local nonlinear subspace projections in each block manifold, or alternatively search for linear subspace projections in the kernel space associated with each blockset. Experimental results on the ORL face database attest to the reliability of the proposed block-wise approach compared with related published methods.

17.
This paper studies regularized discriminant analysis (RDA) in the context of face recognition. We examine RDA's sensitivity to different photometric preprocessing methods and compare its performance with that of other classifiers. Our study shows that RDA extracts the relevant discriminatory information from training data better than the other classifiers tested, thus obtaining a lower error rate. Moreover, RDA is robust under various lighting conditions, while the other classifiers perform badly when no photometric preprocessing is applied.
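A minimal sketch of the two-parameter covariance regularization usually associated with RDA (Friedman-style shrinkage). The abstract does not state the exact regularizer, so this form is an assumption, and the lambda, gamma values and data are illustrative.

```python
import numpy as np

def rda_covariance(S_class, S_pooled, lam=0.5, gamma=0.1):
    S = (1.0 - lam) * S_class + lam * S_pooled                        # shrink toward the pooled covariance
    d = S.shape[0]
    return (1.0 - gamma) * S + gamma * (np.trace(S) / d) * np.eye(d)  # then toward a scaled identity

rng = np.random.default_rng(0)
Xk = rng.standard_normal((30, 10))               # placeholder samples from one class
S_class = np.cov(Xk, rowvar=False)
S_pooled = np.eye(10)                            # placeholder pooled covariance
S_reg = rda_covariance(S_class, S_pooled)        # plugged into the usual Gaussian discriminant score
```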

18.
Although the 2DLDA algorithm achieves high recognition accuracy, a vital unresolved problem is that it needs a huge feature matrix for face recognition. To overcome this problem, this paper presents an efficient approach for face image feature extraction, namely the (2D)2LDA method. Experimental results on the ORL and Yale databases show that the proposed method obtains good recognition accuracy despite using far fewer coefficients.

19.
Face recognition is a challenging task in computer vision and pattern recognition. It is well known that obtaining a low-dimensional feature representation with enhanced discriminatory power is of paramount importance to face recognition. Moreover, recent research has shown that face images reside on a possibly nonlinear manifold, so how to effectively exploit this hidden structure is a key problem that significantly affects the recognition results. In this paper, we propose a new unsupervised nonlinear feature extraction method called spectral feature analysis (SFA). The main advantages of SFA over traditional feature extraction methods are: (1) SFA does not suffer from the small-sample-size problem; (2) SFA can extract discriminatory information from the data, and we show that linear discriminant analysis can be subsumed under the SFA framework; (3) SFA can effectively discover the nonlinear structure hidden in the data. These appealing properties make SFA very suitable for face recognition tasks. Experimental results on three benchmark face databases illustrate the superiority of SFA over traditional methods.

20.
To address the limitations of Laplacian Eigenmaps (LE), which only preserves local neighborhood information and cannot describe new test points, a Laplacian Eigenmaps algorithm based on two-dimensional kernel principal component analysis (2D-KPCA LE) is proposed. Unlike kernel 2DPCA (K2DPCA), the algorithm first performs two-dimensional principal component analysis (2DPCA) on the training samples, preserving the spatial structure of the samples while obtaining a low-rank, decorrelated projection feature matrix; kernel principal component analysis (KPCA) is then used to extract global nonlinear features; and because the kernel matrix requires a large amount of storage, Laplacian Eigenmaps (LE) is finally applied for dimensionality reduction. Simulation results on the ORL and FERET face databases show that the 2D-KPCA-based Laplacian Eigenmaps algorithm can effectively handle complex nonlinear features while reducing algorithmic complexity and improving the recognition rate of manifold learning.
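A minimal sketch of the 2DPCA → KPCA → Laplacian Eigenmaps chain using scikit-learn, with SpectralEmbedding standing in for LE; all data and parameter choices are placeholders, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.manifold import SpectralEmbedding

rng = np.random.default_rng(0)
images = rng.standard_normal((100, 32, 32))      # placeholder face images

# Step 1: 2DPCA -- project onto the top eigenvectors of the image covariance
# matrix, keeping the 2-D structure, then flatten the low-rank feature matrices.
centered = images - images.mean(axis=0)
G = np.einsum('nij,nik->jk', centered, centered) / len(images)
W = np.linalg.eigh(G)[1][:, ::-1][:, :8]
feat2d = (centered @ W).reshape(len(images), -1)

# Step 2: KPCA extracts global nonlinear features from the 2DPCA output.
feat_kpca = KernelPCA(n_components=30, kernel='rbf', gamma=1e-3).fit_transform(feat2d)

# Step 3: Laplacian Eigenmaps (spectral embedding) for the final reduction.
embedding = SpectralEmbedding(n_components=10, n_neighbors=8).fit_transform(feat_kpca)
```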
