20 similar documents retrieved.
1.
Itzik Pima 《Pattern recognition》2004,37(9):1945-1948
This paper studies regularized discriminant analysis (RDA) in the context of face recognition. We examine RDA's sensitivity to different photometric preprocessing methods and compare its performance to that of other classifiers. Our study shows that RDA extracts the relevant discriminatory information from the training data better than the other classifiers tested, and therefore obtains a lower error rate. Moreover, RDA is robust under various lighting conditions, whereas the other classifiers perform badly when no photometric preprocessing is applied.
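For orientation, RDA is often formulated in the style of Friedman's regularized discriminant analysis, where each class covariance estimate is shrunk toward the pooled covariance and then toward a scaled identity; the abstract does not state the exact estimator used, so the following is only a generic sketch:

```latex
% Generic Friedman-style RDA covariance estimator (a standard formulation;
% the paper may parameterize the regularization differently).
\hat{\Sigma}_k(\lambda) = (1-\lambda)\,\hat{\Sigma}_k + \lambda\,\hat{\Sigma}_{\text{pooled}}, \qquad
\hat{\Sigma}_k(\lambda,\gamma) = (1-\gamma)\,\hat{\Sigma}_k(\lambda)
  + \frac{\gamma}{d}\,\operatorname{tr}\!\bigl[\hat{\Sigma}_k(\lambda)\bigr]\, I_d .
```

Classification then uses the Gaussian discriminant score with the regularized covariance; the pair (λ, γ) interpolates between quadratic, linear and nearest-mean-like classifiers.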
2.
Abhishek Sharma, Murad Al Haj, Jonghyun Choi, Larry S. Davis, David W. Jacobs 《Computer Vision and Image Understanding》2012,116(11):1095-1110
We propose a novel pose-invariant face recognition approach, which we call the Discriminant Multiple Coupled Latent Subspace framework. It finds sets of projection directions for different poses such that the projected images of the same subject in different poses are maximally correlated in the latent space. Discriminant analysis with artificially simulated pose errors in the latent space makes it robust to small errors caused by incorrect estimation of a subject's pose. We compare three popular latent space learning approaches within the proposed coupled latent subspace framework: Partial Least Squares (PLS), the Bilinear Model (BLM) and Canonical Correlation Analysis (CCA). We experimentally demonstrate that using more than two poses simultaneously with CCA results in better performance. We report state-of-the-art results for pose-invariant face recognition on CMU PIE and FERET, and comparable results on MultiPIE, when using only four fiducial points for alignment and intensity features.
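As a rough illustration of the coupled latent subspace idea (two poses only; the authors' framework couples more poses and adds discriminant analysis with simulated pose errors), a CCA-based matcher can be sketched with scikit-learn. The arrays `X_frontal` and `X_profile` are hypothetical per-subject intensity features whose corresponding rows belong to the same subject:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Hypothetical gallery: one frontal and one profile feature vector per subject;
# row i of both arrays belongs to the same subject.
rng = np.random.default_rng(0)
X_frontal = rng.normal(size=(50, 60))   # 50 subjects, 60-dim intensity features
X_profile = rng.normal(size=(50, 60))

# Learn coupled projections so corresponding samples are maximally correlated.
cca = CCA(n_components=10)
cca.fit(X_frontal, X_profile)
_, gallery_latent = cca.transform(X_frontal, X_profile)   # profile gallery in latent space

def identify(frontal_probe):
    """Project a frontal probe into the shared latent space and return the
    index of the most similar profile gallery subject (cosine similarity)."""
    z = cca.transform(frontal_probe.reshape(1, -1)).ravel()   # x-side projection only
    sims = gallery_latent @ z
    sims = sims / (np.linalg.norm(gallery_latent, axis=1) * np.linalg.norm(z) + 1e-12)
    return int(np.argmax(sims))
```

In this sketch, matching happens entirely in the shared latent space, which is what allows probe and gallery images taken in different poses to be compared directly.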
3.
Cairong Zhao, Zhihui Lai, Duoqian Miao, Zhihua Wei, Caihui Liu 《Neural computing & applications》2014,24(7-8):1697-1706
This paper develops a supervised discriminant technique, called graph embedding discriminant analysis (GEDA), for dimensionality reduction of high-dimensional data in small sample size problems. GEDA can be seen as a linear approximation of a multimanifold-based learning framework in which the nonlocal property is taken into account in addition to the marginal and local properties. GEDA seeks a set of projections that compact the intraclass samples and maximize the interclass margin while simultaneously maximizing the nonlocal scatter. This characteristic makes GEDA more intuitive and more powerful than linear discriminant analysis (LDA) and marginal Fisher analysis (MFA). The proposed method is applied to face recognition and is evaluated on the Yale, ORL and AR face image databases. The experimental results show that GEDA consistently outperforms LDA and MFA when the training sample size per class is small.
4.
The selection of the kernel function and its parameters influences the performance of a kernel learning machine: different kernels and parameter settings induce different geometric structures in the empirical feature space. The traditional approach of changing only the kernel parameters does not change the data distribution in the empirical feature space, and is therefore not sufficient to improve the performance of kernel learning. This paper applies kernel optimization to enhance the performance of kernel discriminant analysis and proposes Kernel Optimization-based Discriminant Analysis (KODA) for face recognition. KODA proceeds in two steps, kernel optimization and projection: it automatically adjusts the kernel parameters according to the input samples, thereby improving feature extraction for face recognition. Simulations on the Yale and ORL face databases demonstrate the feasibility of enhancing KDA with kernel optimization.
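The abstract leaves the optimization procedure unspecified; as a loose illustration of choosing kernel parameters from the input samples (kernel-target alignment rather than the paper's KODA procedure), one might score candidate RBF widths as follows:

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def kernel_target_alignment(K, y):
    """Alignment between a kernel matrix and the ideal label kernel
    (Cristianini et al.); higher means the kernel better separates classes."""
    y = np.asarray(y)
    Y = (y[:, None] == y[None, :]).astype(float) * 2 - 1   # +1 same class, -1 otherwise
    return np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y))

def select_gamma(X, y, candidates=(1e-4, 1e-3, 1e-2, 1e-1, 1.0)):
    """Pick the RBF width whose kernel matrix aligns best with the labels."""
    scores = {g: kernel_target_alignment(rbf_kernel(X, gamma=g), y) for g in candidates}
    return max(scores, key=scores.get)
```

This is only a grid search over a data-dependent score; the paper's method optimizes the kernel itself rather than selecting among fixed candidates.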
5.
Linear subspace analysis methods have been successfully applied to extract features for face recognition, but their linear nature makes them inadequate for representing the complex and nonlinear variations of real face images, such as illumination, facial expression and pose variations. In this paper, a nonlinear subspace analysis method, Kernel-based Nonlinear Discriminant Analysis (KNDA), is presented for face recognition; it combines the nonlinear kernel trick with the linear subspace analysis method Fisher Linear Discriminant Analysis (FLDA). First, the kernel trick is used to project the input data into an implicit feature space; then FLDA is performed in this feature space, yielding nonlinear discriminant features of the input data. In addition, a geometry-based feature vector selection scheme is adopted to reduce the computational complexity. A similar nonlinear subspace analysis method is Kernel-based Principal Component Analysis (KPCA), which combines the kernel trick with linear Principal Component Analysis (PCA). Experiments are performed with the polynomial kernel, and KNDA is compared with KPCA and FLDA. Extensive experimental results show that KNDA achieves a higher recognition rate than KPCA and FLDA.
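Since KNDA is described as FLDA carried out in a kernel-induced feature space, a minimal multi-class kernel Fisher discriminant with a polynomial kernel may help fix ideas (a generic textbook formulation; the authors' geometry-based feature-vector selection for reducing complexity is not reproduced here):

```python
import numpy as np
from scipy.linalg import eigh

def polynomial_kernel(X, Y, degree=2, coef0=1.0):
    return (X @ Y.T + coef0) ** degree

def kernel_fda(X, y, n_components, degree=2, reg=1e-3):
    """Minimal multi-class kernel Fisher discriminant analysis.

    Solves the generalized eigenproblem M a = lambda N a in the span of the
    training samples, where M and N play the roles of the between- and
    within-class scatter matrices in feature space.
    """
    y = np.asarray(y)
    n = X.shape[0]
    K = polynomial_kernel(X, X, degree)
    m_star = K.mean(axis=1, keepdims=True)            # kernel column mean over all samples
    M = np.zeros((n, n))
    N = np.zeros((n, n))
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        K_c = K[:, idx]                               # n x n_c block for class c
        m_c = K_c.mean(axis=1, keepdims=True)
        n_c = len(idx)
        M += n_c * (m_c - m_star) @ (m_c - m_star).T
        H = np.eye(n_c) - np.ones((n_c, n_c)) / n_c   # within-class centering
        N += K_c @ H @ K_c.T
    N += reg * np.eye(n)                              # regularize the singular N
    vals, A = eigh(M, N)
    A = A[:, np.argsort(vals)[::-1][:n_components]]   # keep leading discriminant directions
    project = lambda Xt: polynomial_kernel(Xt, X, degree) @ A
    return A, project
```

Projected training and probe features can then be classified with any simple rule, for example a nearest-neighbor classifier in the discriminant space.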
6.
Incremental linear discriminant analysis for face recognition.
Haitao Zhao, Pong Chi Yuen 《IEEE transactions on systems, man, and cybernetics. Part B, Cybernetics》2008,38(1):210-221
Dimensionality reduction methods have been successfully employed for face recognition. Among the various dimensionality reduction algorithms, linear (Fisher) discriminant analysis (LDA) is one of the most popular supervised methods, and many LDA-based face recognition algorithms and systems have been reported in the last decade. However, LDA-based face recognition systems suffer from a scalability problem, and an incremental approach is a natural way to overcome it. The main difficulty in developing incremental LDA (ILDA) is handling the inverse of the within-class scatter matrix. In this paper, based on generalized singular value decomposition LDA (LDA/GSVD), we develop a new ILDA algorithm called GSVD-ILDA. Unlike existing techniques, in which the new projection matrix is found in a restricted subspace, the proposed GSVD-ILDA determines the projection matrix in the full space. Extensive experiments compare the proposed GSVD-ILDA with LDA/GSVD and with existing ILDA methods on the Face Recognition Technology (FERET) face database and the Carnegie Mellon University Pose, Illumination, and Expression (PIE) face database. Experimental results show that GSVD-ILDA gives the same performance as LDA/GSVD with much lower computational complexity, and that it gives better classification performance than other recently proposed ILDA algorithms.
7.
Haifeng Hu 《Pattern recognition》2008,41(6):2045-2054
In this paper, we propose a new linear subspace analysis algorithm called orthogonal neighborhood preserving discriminant analysis (ONPDA). Given a set of data points in the ambient space, a weight matrix is first built to describe the relationships between the data points. Between-class and within-class scatter matrices are then defined such that the neighborhood structure is preserved. To improve the discriminating power, a new method is presented for orthogonalizing the basis eigenvectors. We evaluate the performance of the proposed algorithm for face recognition on several databases. Consistent and promising results demonstrate the effectiveness of our algorithm.
8.
In this paper, we propose a new kernel discriminant analysis called kernel relevance weighted discriminant analysis (KRWDA), which has several interesting characteristics. First, it can effectively deal with the small sample size problem by using a QR decomposition of the scatter matrices. Second, by incorporating a weighting function into the discriminant criterion, it overcomes the overemphasis on well-separated classes and hence works under more realistic conditions. Finally, using kernel theory, it handles nonlinearity efficiently. To improve the performance of the proposed algorithm, we introduce two novel kernel functions and compare them with kernels commonly used in the face recognition field. We have performed multiple face recognition experiments comparing KRWDA with other dimensionality reduction methods, showing that KRWDA consistently gives the best results.
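The weighting idea can be made concrete with a relevance-weighted between-class scatter of the usual form (a common construction in weighted LDA; KRWDA's kernelized criterion and its specific weight function are defined in the paper):

```latex
% Weighted between-class scatter: pairs of classes that are already far apart
% (large Mahalanobis distance d_{ij}) receive a small weight w(d_{ij}).
S_b^{w} = \sum_{i=1}^{C-1}\sum_{j=i+1}^{C} p_i\, p_j\, w(d_{ij})\,
          (m_i - m_j)(m_i - m_j)^{\mathsf T}, \qquad
d_{ij} = \sqrt{(m_i - m_j)^{\mathsf T} S_w^{-1} (m_i - m_j)} .
```

A decreasing weight such as $w(d_{ij}) = d_{ij}^{-2p}$ prevents well-separated class pairs from dominating the criterion.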
9.
Xiaohua Gu, Weiguo Gong, Liping Yang 《Neurocomputing》2011,74(17):3036-3042
This paper proposes a regularized locality preserving discriminant analysis (RLPDA) approach for facial feature extraction and recognition. RLPDA decomposes the eigenspace of the locality preserving within-class scatter matrix into three subspaces, namely the face space, the noise space and the null space, and then regularizes each subspace differently according to its predicted eigenvalues. As a result, the proposed approach integrates the discriminative information in all three subspaces, de-emphasizes the effect of the eigenvectors corresponding to small eigenvalues, and at the same time alleviates the small sample size problem. Extensive experiments on the ORL face database, a FERET face subset and the UMIST face database illustrate the effectiveness of the proposed approach.
10.
Hong Huang, Jianwei Li, Jiamin Liu 《Future Generation Computer Systems》2012,28(1):244-253
An improved manifold learning method, called enhanced semi-supervised local Fisher discriminant analysis (ESELF), is proposed for face recognition. Motivated by the fact that statistical uncorrelatedness and parameter-freeness are two desirable and promising characteristics for dimensionality reduction, a new difference-based optimization objective function that uses unlabeled samples is designed. The proposed method preserves the manifold structure of labeled and unlabeled samples while separating labeled samples of different classes from each other. The semi-supervised method has an analytic, globally optimal solution that can be computed by eigendecomposition. Experiments on synthetic data and on the AT&T, Yale and CMU PIE face databases are performed to evaluate the proposed algorithm, and the experimental results and comparisons demonstrate its effectiveness.
11.
Xiao-ning Song, Yu-jie Zheng, Xiao-jun Wu, Xi-bei Yang, Jing-yu Yang 《Applied Soft Computing》2010,10(1):208-214
In this paper, we study the fuzzy linear discriminant analysis (F-LDA) algorithm and the fuzzy support vector machine (FSVM) classifier. As a kernel-based learning machine, FSVM is represented with a fuzzy membership function while realizing the same classification results as conventional pair-wise classification; it outperforms other learning machines especially when unclassifiable regions remain for those conventional classifiers. However, a serious drawback of FSVM is that its computational requirements grow rapidly with the number of classes and the training sample size. To address this problem, we first propose an improved FSVM method, called DT-FSVM, that combines the advantages of FSVM and decision trees. Furthermore, for feature extraction, a reformative F-LDA algorithm based on fuzzy k-nearest neighbors (FKNN) is implemented to obtain the distribution information of each original sample, represented by fuzzy membership grades, which is incorporated into the redefinition of the scatter matrices. In particular, since outlier samples may adversely influence the classification result, we develop a novel F-LDA algorithm that uses a relaxed normalization condition in the definition of the fuzzy membership function, which effectively alleviates the limitation caused by outlier samples. Finally, making full use of fuzzy set theory, a complete F-LDA (CF-LDA) framework is developed by combining the reformative F-LDA (RF-LDA) feature extraction method with the DT-FSVM classifier. This hybrid fuzzy algorithm is applied to the face recognition problem, and extensive experimental studies on the ORL and NUST603 face image databases demonstrate its effectiveness.
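The FKNN-based membership grades mentioned above are commonly initialized with Keller's fuzzy k-nearest-neighbor rule; one standard form (the paper further relaxes the normalization condition) assigns sample $x_i$ a membership in class $j$ of:

```latex
% Fuzzy k-nearest-neighbor membership initialization (Keller-style):
% n_{ij} is the number of the k nearest neighbors of x_i that belong to
% class j, and y_i is the crisp label of x_i.
u_{ij} =
\begin{cases}
0.51 + 0.49\,\dfrac{n_{ij}}{k}, & j = y_i,\\[4pt]
0.49\,\dfrac{n_{ij}}{k}, & j \neq y_i .
\end{cases}
```

These grades then re-weight the class means and scatter matrices used by F-LDA.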
12.
This paper develops a new image feature extraction and recognition method coined two-dimensional linear discriminant analysis (2DLDA). 2DLDA provides a sequentially optimal image compression mechanism, compacting the discriminant information into the upper-left corner of the image, and suggests a feature selection strategy for selecting the most discriminative features from that corner. 2DLDA is tested and evaluated on the AT&T face database. The experimental results show that 2DLDA is more effective and computationally more efficient than current LDA algorithms for face feature extraction and recognition.
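A compact sketch of the basic two-dimensional LDA step described above (image-level scatter matrices and a generalized eigenproblem; the paper's sequential compression and corner feature selection are not reproduced here):

```python
import numpy as np
from scipy.linalg import eigh

def two_d_lda(images, labels, n_components):
    """Basic 2DLDA: images has shape (n, h, w); returns a w x k projection
    matrix V so that each image A is reduced to A @ V (an h x k feature matrix)."""
    images = np.asarray(images, dtype=float)
    labels = np.asarray(labels)
    global_mean = images.mean(axis=0)
    w = global_mean.shape[1]
    S_b = np.zeros((w, w))
    S_w = np.zeros((w, w))
    for c in np.unique(labels):
        cls = images[labels == c]
        class_mean = cls.mean(axis=0)
        diff = class_mean - global_mean
        S_b += len(cls) * diff.T @ diff              # image-level between-class scatter
        for A in cls:
            d = A - class_mean
            S_w += d.T @ d                           # image-level within-class scatter
    vals, vecs = eigh(S_b, S_w + 1e-6 * np.eye(w))   # generalized eigenproblem
    return vecs[:, np.argsort(vals)[::-1][:n_components]]
```

Because the scatter matrices are only w x w (the image width), this avoids vectorizing images into very long vectors, which is the main computational advantage usually attributed to 2D variants of LDA.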
13.
Lijun Yan, Shu-Chuan Chu 《Future Generation Computer Systems》2012,28(1):232-235
A novel image classification algorithm named Adaptively Weighted Sub-directional Two-Dimensional Linear Discriminant Analysis (AWS2DLDA) is proposed in this paper. AWS2DLDA can extract directional features of images in the frequency domain, and it is applied to face recognition. Experiments are conducted to demonstrate the effectiveness of the proposed method, and the results confirm that its recognition rate is higher than that of other popular algorithms.
14.
To address two problems of the linear discriminant analysis (LDA) approach in face recognition, namely the nonlinearity problem and the singularity problem, this paper proposes a novel kernel machine-based rank-lifting regularized discriminant analysis (KRLRDA) method. A rank-lifting theorem is first proven using linear algebra. Combining the rank-lifting strategy with a three-to-one regularization technique, a complete regularized methodology is developed for the within-class scatter matrix. The proposed regularized scheme not only adjusts the projection directions but also tunes their corresponding weights. Moreover, it is shown that the final regularized within-class scatter matrix approaches the original one as the regularization parameter tends to zero. Two publicly available databases, the FERET and CMU PIE face databases, are selected for evaluation. Compared with several existing kernel-based LDA methods, the proposed KRLRDA approach gives superior performance.
15.
Xiao-Zhang Liu, Chen-Guang Zhang 《Soft Computing - A Fusion of Foundations, Methodologies and Applications》2016,20(3):831-840
This paper introduces the concept of the kernel cuboid and proposes a new kernel-based image feature extraction method for face recognition. The proposed method processes a face image in a block-wise manner and independently performs kernel discriminant analysis in each block set, using a kernel cuboid instead of a kernel matrix. Experimental results on the ORL and UMIST face databases show the effectiveness and scalability of the proposed method.
16.
Face recognition is an important research topic in pattern recognition with broad application prospects. To further improve the robustness of linear discriminant methods for face recognition, a column-based nearest-neighbor linear discriminant analysis (CBLDA) is proposed. CBLDA finds a projection matrix for each class such that, after projection, each column of a face image moves closer to its within-class column nearest neighbor and farther from its between-class column nearest neighbor; projecting a test sample with the projection matrix of its own class therefore yields a result that is more favorable for classification. CBLDA resembles block-based or sub-image methods, with the nearest-neighbor column serving as the blocking strategy, which has two main advantages: (1) the column is an intrinsic dimension of the image that changes with the resolution, so no block size needs to be chosen; (2) faces are symmetric, so computing within-class column nearest neighbors helps overcome some left-right pose and illumination variations and improves the robustness of the algorithm. To verify the effectiveness of CBLDA, comparative experiments with two-dimensional algorithms such as 2D-LDA, 2D-LPP and 2D-LGEDA were conducted on the ORL and FERET face databases; the results show that CBLDA substantially improves the recognition rate, demonstrating the effectiveness of the algorithm.
17.
Linear discriminant analysis (LDA) is one of the most popular techniques for extracting features for face recognition. LDA captures the global geometric structure, but local geometric structure has recently been shown to be effective for face recognition as well. In this paper, we propose a novel feature extraction algorithm that integrates both global and local geometric structures. We first cast LDA as a least squares problem based on spectral regression, and then use regularization to model the global and local geometric structures. Furthermore, we impose a penalty on the parameters to tackle the singularity problem and design an efficient model selection algorithm to choose the optimal tuning parameter balancing the tradeoff between the global and local structures. Experimental results on four well-known face data sets show that the proposed integration framework is competitive with traditional face recognition algorithms that use either the global or the local structure only.
18.
Enhanced independent component analysis (EICA) is an unsupervised feature extraction method based on the global characteristics of the samples; it does not consider their local characteristics, and is therefore ill-suited to nonlinear problems such as face recognition. Unsupervised discriminant projection (UDP) is a technique for compressing high-dimensional data whose basic idea is to find a set of effective projection directions such that, after projection, the local scatter of the samples is minimized while the nonlocal scatter is maximized. Because UDP considers both the local and the nonlocal characteristics of the samples, it reflects the intrinsic relationships in the data and can therefore classify samples effectively. This paper proposes an enhanced unsupervised face discriminant technique that combines the advantages of EICA and UDP and can (1) reflect the higher-order statistical characteristics of the samples and (2) uncover their intrinsic geometric structure, which benefits classification. Experiments on the Yale and FERET face databases verify the effectiveness of the algorithm.
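For context, UDP (the second ingredient above) seeks directions that maximize the ratio of nonlocal to local scatter, roughly as follows (a sketch of the usual formulation, with normalization constants omitted since they do not affect the maximizer):

```latex
% UDP criterion: H_{ij} = 1 if x_i and x_j are (k- or epsilon-) neighbors, else 0.
J(w) = \frac{w^{\mathsf T} S_N\, w}{w^{\mathsf T} S_L\, w}, \qquad
S_L = \tfrac{1}{2}\sum_{i,j} H_{ij}\,(x_i - x_j)(x_i - x_j)^{\mathsf T}, \qquad
S_N = \tfrac{1}{2}\sum_{i,j} \bigl(1 - H_{ij}\bigr)\,(x_i - x_j)(x_i - x_j)^{\mathsf T} .
```

The enhanced method described above combines this unsupervised criterion with EICA-style higher-order statistical features.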
19.
In this paper we address robust face recognition by formulating the pattern recognition task as a problem of robust estimation. Using the fundamental observation that, in general, patterns from a single object class lie on a linear subspace (Basri and Jacobs, 2003 [1]), we develop a linear model that represents a probe image as a linear combination of class-specific galleries. In the presence of noise, the well-conditioned inverse problem is solved using robust Huber estimation, and the decision is ruled in favor of the class with the minimum reconstruction error. The proposed Robust Linear Regression Classification (RLRC) algorithm is extensively evaluated for two important cases of robustness, namely illumination variations and random pixel corruption. Illumination-invariant face recognition is demonstrated on three standard databases under evaluation protocols reported in the literature; a comprehensive comparison with state-of-the-art illumination-tolerant approaches indicates a comparable performance index for the proposed RLRC algorithm. The efficiency of the proposed approach in the presence of severe random noise is validated under several noise models, including the dead-pixel problem, salt-and-pepper noise, speckle noise and additive white Gaussian noise (AWGN); the RLRC algorithm compares favorably with benchmark generative approaches.
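A minimal sketch of the regression-and-decision step, using scikit-learn's HuberRegressor as a stand-in for the robust Huber estimation described above (the gallery layout and preprocessing are assumptions, not the paper's implementation):

```python
import numpy as np
from sklearn.linear_model import HuberRegressor

def rlrc_predict(probe, galleries):
    """Classify a vectorized probe image by robust regression onto each class gallery.

    probe: 1-D array of length d (vectorized image).
    galleries: dict mapping class label -> array of shape (n_c, d) whose rows are
    vectorized training images of that class. Returns the label whose gallery
    reconstructs the probe with the smallest residual.
    """
    errors = {}
    for label, G in galleries.items():
        # Represent the probe as a linear combination of this class's images;
        # each pixel acts as one observation, each gallery image as one predictor.
        reg = HuberRegressor(fit_intercept=False).fit(G.T, probe)
        reconstruction = G.T @ reg.coef_
        errors[label] = np.linalg.norm(probe - reconstruction)
    return min(errors, key=errors.get)
```

Replacing HuberRegressor with ordinary least squares recovers plain linear regression classification; the Huber loss is what provides the robustness to corrupted pixels.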