Similar Documents
1.
Linear discriminant regression classification (LDRC) was recently proposed to boost the effectiveness of linear regression classification (LRC). LDRC aims to find a subspace in which LRC achieves high discrimination for classification. As a discriminant analysis algorithm, however, LDRC assigns equal importance to every training sample and ignores the different contributions these samples make to learning the discriminative feature subspace. Motivated by the fact that some training samples are more effective than others in learning the low-dimensional feature space, in this paper we propose an adaptive linear discriminant regression classification (ALDRC) algorithm that takes the different contributions of the training samples into account. Specifically, ALDRC uses weights to characterize the contribution of each training sample, employs this weighting information to calculate the between-class and within-class reconstruction errors, and then seeks an optimal projection matrix that maximizes the ratio of the between-class reconstruction error to the within-class reconstruction error. Extensive experiments on the AR, FERET and ORL face databases demonstrate the effectiveness of the proposed method.
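Both LDRC and the proposed ALDRC operate on top of the LRC decision rule, which represents a test sample as a linear combination of each class's training samples and assigns the class with the smallest reconstruction residual. A minimal NumPy sketch of that baseline rule (the data layout is an assumption for illustration, not taken from the paper):

```python
import numpy as np

def lrc_predict(class_matrices, y):
    """Linear regression classification (LRC).

    class_matrices: dict mapping class label -> (d, n_c) matrix whose
    columns are the training vectors of that class.
    y: test vector of dimension d.
    """
    best_label, best_residual = None, np.inf
    for label, X in class_matrices.items():
        # Least-squares fit of y in the span of the class samples:
        # beta = argmin ||y - X beta||, predicted sample y_hat = X beta.
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        residual = np.linalg.norm(y - X @ beta)
        if residual < best_residual:
            best_label, best_residual = label, residual
    return best_label
```

ALDRC then learns a projection matrix P (by maximizing the weighted between-class to within-class reconstruction-error ratio) and applies this same rule to the projected samples.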

2.
Linear regression classification (LRC) uses the least-squares algorithm to solve the linear regression equation and shows good classification performance on face image data. However, when the regression axes of class-specific samples intersect, LRC cannot reliably classify samples distributed near the intersections. Moreover, LRC performs poorly under severe lighting variations. This paper proposes a new classification method, kernel linear regression classification (KLRC), based on LRC and the kernel trick. KLRC is a nonlinear extension of LRC that offsets these drawbacks. KLRC implicitly maps the data into a high-dimensional kernel space using the nonlinear mapping determined by a kernel function. Through this mapping, KLRC makes the data more linearly separable and performs well for face recognition under varying lighting. For comparison, we conduct experiments on three standard databases under several evaluation protocols. The proposed methodology not only outperforms LRC but also performs better than typical kernel methods such as kernel linear discriminant analysis and kernel principal component analysis.
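In kernel space the class-specific least-squares residual can be computed from Gram matrices alone, without ever forming the implicit feature map. A sketch of that computation with an RBF kernel (the kernel choice, the gamma value, and the small ridge term for numerical stability are assumptions, not prescribed by the paper):

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    """Gaussian RBF kernel k(a, b) = exp(-gamma * ||a - b||^2)."""
    return np.exp(-gamma * np.sum((a - b) ** 2))

def klrc_residual(X, y, gamma=1.0, ridge=1e-8):
    """Squared reconstruction error of phi(y) from one class's samples
    (columns of X) in the implicit RBF feature space:
    ||phi(y) - Phi alpha||^2 = k(y,y) - 2 k^T alpha + alpha^T K alpha,
    where alpha solves the kernelized least-squares problem."""
    n = X.shape[1]
    K = np.array([[rbf(X[:, i], X[:, j], gamma) for j in range(n)]
                  for i in range(n)])
    k = np.array([rbf(X[:, i], y, gamma) for i in range(n)])
    alpha = np.linalg.solve(K + ridge * np.eye(n), k)
    return rbf(y, y, gamma) - 2.0 * (k @ alpha) + alpha @ K @ alpha
```

KLRC then assigns y to the class with the smallest residual, exactly as LRC does in the input space.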

3.
Classification using l2-norm-based representation is usually computationally efficient and achieves high accuracy in face recognition. Among l2-norm-based representation methods, linear regression classification (LRC) and collaborative representation classification (CRC) have been widely used. LRC and CRC produce residuals in very different ways, but both use residuals to perform classification. Therefore, combining the residuals of these two methods can yield better face recognition performance. In this paper, a simple weighted-sum-based fusion scheme is proposed to integrate LRC and CRC for more accurate face recognition, and the rationale of the proposed method is analyzed. Face recognition experiments show that the proposed method outperforms both LRC and CRC.
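A weighted-sum fusion of the two per-class residual vectors can be sketched as follows (the sum-to-one normalization and the weight w are illustrative assumptions; the paper's exact weighting scheme may differ):

```python
import numpy as np

def fused_predict(lrc_residuals, crc_residuals, w=0.5):
    """Fuse per-class residuals from LRC and CRC by a weighted sum.

    Each residual array is indexed by class label; smaller means a
    better fit. Residuals are normalized to sum to 1 so the two
    scores are on a comparable scale before mixing.
    """
    r1 = np.asarray(lrc_residuals, dtype=float)
    r2 = np.asarray(crc_residuals, dtype=float)
    r1 = r1 / r1.sum()
    r2 = r2 / r2.sum()
    fused = w * r1 + (1.0 - w) * r2
    return int(np.argmin(fused))   # class with the smallest fused residual
```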

4.
Incremental linear discriminant analysis for face recognition.
Dimensionality reduction methods have been successfully employed for face recognition. Among them, linear (Fisher) discriminant analysis (LDA) is one of the most popular supervised methods, and many LDA-based face recognition algorithms and systems have been reported in the last decade. However, LDA-based face recognition systems suffer from a scalability problem, and an incremental approach is a natural way to overcome this limitation. The main difficulty in developing an incremental LDA (ILDA) is handling the inverse of the within-class scatter matrix. In this paper, based on generalized singular value decomposition LDA (LDA/GSVD), we develop a new ILDA algorithm called GSVD-ILDA. Unlike existing techniques, which find the new projection matrix in a restricted subspace, GSVD-ILDA determines the projection matrix in the full space. Extensive experiments compare GSVD-ILDA with LDA/GSVD and existing ILDA methods on the FERET (Face Recognition Technology) face database and the Carnegie Mellon University Pose, Illumination, and Expression (PIE) face database. Experimental results show that GSVD-ILDA matches the performance of LDA/GSVD at much lower computational cost, and that it gives better classification performance than other recently proposed ILDA algorithms.

5.
6.
A novel image classification algorithm named Adaptively Weighted Sub-directional Two-Dimensional Linear Discriminant Analysis (AWS2DLDA) is proposed in this paper. AWS2DLDA can extract directional features of images in the frequency domain, and it is applied to face recognition. Experiments are conducted to demonstrate the effectiveness of the proposed method, and the results confirm that its recognition rate is higher than those of other popular algorithms.

7.
Linear discriminant analysis (LDA) is one of the most popular techniques for feature extraction in face recognition. LDA captures the global geometric structure, but local geometric structure has recently been shown to be effective for face recognition as well. In this paper, we propose a novel feature extraction algorithm that integrates both global and local geometric structures. We first cast LDA as a least-squares problem via spectral regression, and then use regularization to model the global and local geometric structures. Furthermore, we impose a penalty on the parameters to tackle the singularity problem, and design an efficient model selection algorithm to choose the optimal tuning parameter that balances the tradeoff between the global and local structures. Experimental results on four well-known face data sets show that the proposed integration framework is competitive with traditional face recognition algorithms that use either the global or the local structure alone.

8.
Classifier design has attracted much attention in the past several decades. Inspired by the locality-preserving idea of manifold learning, we present a local linear regression (LLR) classifier. The proposed classifier consists of three steps: first, for each class, search for the k nearest neighbors of the given test sample within that class; second, reconstruct the test sample from the k nearest neighbors of each class; and third, assign the test sample to the class with the minimum reconstruction error. Experimental results on the ETH80 database, the CENPARMI handwritten numeral database and the FERET face image database demonstrate that LLR works well, leading to promising image classification performance.
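The three steps above map directly onto code. A minimal NumPy sketch (the Euclidean neighbor metric and the data layout are assumptions for illustration):

```python
import numpy as np

def llr_predict(class_matrices, y, k=3):
    """Local linear regression (LLR) classification.

    class_matrices: dict mapping class label -> (d, n_c) matrix whose
    columns are training samples; y: test vector; k: neighbors per class.
    """
    best_label, best_err = None, np.inf
    for label, X in class_matrices.items():
        # Step 1: find the k nearest neighbors of y within this class.
        dists = np.linalg.norm(X - y[:, None], axis=0)
        nn = X[:, np.argsort(dists)[:k]]
        # Step 2: least-squares reconstruction of y from those neighbors.
        beta, *_ = np.linalg.lstsq(nn, y, rcond=None)
        err = np.linalg.norm(y - nn @ beta)
        # Step 3: keep the class with the minimum reconstruction error.
        if err < best_err:
            best_label, best_err = label, err
    return best_label
```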

9.
This paper presents a simple but efficient novel H-eigenface (Hybrid-eigenface) method for pose-invariant face recognition ranging from frontal to profile views. H-eigenfaces are an entirely new basis for representing face images under different poses and are used for virtual frontal view synthesis. The proposed method is based on the fact that face samples of the same person under different poses are similar in the combination pattern of their facial features. H-eigenfaces exploit this fact, so two H-eigenfaces under different poses capture the same facial features, thereby providing a compact view-based subspace that can be used to generate a virtual frontal view from an input non-frontal face image via a least-squares projection technique. Experiments on the FERET and ORL face databases show an impressive improvement in recognition accuracy and a distinct reduction in online computation compared to the global linear regression method.

10.
In this paper an efficient feature extraction method named locally linear discriminant embedding (LLDE) is proposed for face recognition. It is well known that in classical locally linear embedding (LLE) a point can be linearly reconstructed from its neighbors with reconstruction weights under a sum-to-one constraint, so the constrained weights obey an important symmetry: for any particular data point, they are invariant to rotations, rescalings and translations. The latter two operations are introduced into the proposed method to strengthen the classification ability of the original LLE. Data with different class labels are translated by corresponding vectors, with samples of the same class translated by the same vector; to cluster same-label data more tightly, the data are also rescaled to some extent. After translation and rescaling, the discriminability of the data improves significantly. The proposed method is compared with related feature extraction methods such as maximum margin criterion (MMC), as well as other supervised manifold-learning-based approaches, for example ensemble unified LLE and linear discriminant analysis (En-ULLELDA) and locally linear discriminant analysis (LLDA). Experimental results on the Yale and CMU PIE face databases show that the proposed method provides a better representation of the class information and obtains much higher recognition accuracies.

11.
Linear discriminant analysis (LDA) often suffers from the small sample size problem when dealing with high-dimensional face data. Random subspace methods can effectively solve this problem by randomly sampling face features, but how to construct an optimal random subspace for discriminant analysis, and how to perform the most efficient discriminant analysis on it, remain open problems. In this paper, we propose a novel framework, random discriminant analysis (RDA), to handle them. The optimal reduced dimension of the face samples is determined so that the constructed random subspace concentrates all the discriminative information of the face space in the two principal subspaces of the within-class and between-class scatter matrices. We then apply Fisherface and direct LDA, respectively, to these two principal subspaces for simultaneous discriminant analysis. The two sets of discriminant features from the dual principal subspaces are first combined at the feature level, and all the random subspaces are then integrated at the decision level. With discriminative information fused at both levels, our method takes full advantage of the useful discriminant information in the face space. Extensive experiments on different face databases demonstrate its performance.

12.
This paper studies regularized discriminant analysis (RDA) in the context of face recognition. We examine RDA's sensitivity to different photometric preprocessing methods and compare its performance to that of other classifiers. Our study shows that RDA extracts the relevant discriminatory information from training data better than the other classifiers tested, thus obtaining a lower error rate. Moreover, RDA is robust under various lighting conditions, while the other classifiers perform badly when no photometric preprocessing is applied.

13.
This paper develops a new image feature extraction and recognition method coined two-dimensional linear discriminant analysis (2DLDA). 2DLDA provides a sequentially optimal image compression mechanism that compacts the discriminant information into the upper-left corner of the image, and it suggests a feature selection strategy to pick the most discriminative features from that corner. 2DLDA is tested and evaluated on the AT&T face database. The experimental results show that 2DLDA is more effective and computationally more efficient than current LDA algorithms for face feature extraction and recognition.

14.
Feature extraction is among the most important problems in face recognition systems. In this paper, we propose an enhanced kernel discriminant analysis (KDA) algorithm called kernel fractional-step discriminant analysis (KFDA) for nonlinear feature extraction and dimensionality reduction. Like other kernel methods, this new algorithm can handle the nonlinearity required by many face recognition tasks; in addition, it outperforms traditional KDA algorithms in resisting the adverse effects of outlier classes. Moreover, to further strengthen the overall performance of KDA algorithms for face recognition, we propose two new kernel functions: the cosine fractional-power polynomial kernel and the non-normal Gaussian RBF kernel. We perform extensive comparative studies on the YaleB and FERET face databases. Experimental results show that our KFDA algorithm outperforms traditional kernel principal component analysis (KPCA) and KDA algorithms, and that further improvement is obtained when the two new kernel functions are used.

15.
To address the degradation of 3D face recognition performance caused by variations in illumination, pose and expression, a prototype-hyperplane learning algorithm is proposed. An SVM represents each sample of a weakly labeled data set as a mid-level feature of a prototype hyperplane, and learned combination coefficients select a sparse set of support vectors from an unlabeled generic data set. The discriminative power over the unlabeled data set is maximized via the Fisher criterion, and the objective function is solved by an iterative optimization algorithm. Features are extracted with SILD, and cosine similarity completes the final face recognition. Experimental results on the UCSD/Honda, FRGC v2 and LFW face data sets, as well as a self-collected one, show that the algorithm outperforms several other 3D face recognition algorithms.

16.
We propose a novel pose-invariant face recognition approach that we call the Discriminant Multiple Coupled Latent Subspace framework. It finds sets of projection directions for different poses such that the projected images of the same subject in different poses are maximally correlated in the latent space. Discriminant analysis with artificially simulated pose errors in the latent space makes the method robust to small pose errors caused by incorrect pose estimation for a subject. We comparatively analyze three popular latent space learning approaches within the proposed coupled latent subspace framework: Partial Least Squares (PLS), the Bilinear Model (BLM) and Canonical Correlation Analysis (CCA), and experimentally demonstrate that using more than two poses simultaneously with CCA gives better performance. We report state-of-the-art results for pose-invariant face recognition on CMU PIE and FERET, and comparable results on MultiPIE, using only four fiducial points for alignment and intensity features.

17.
An algorithm combining kernel principal component analysis (KPCA) in a nonlinear kernel space with linear discriminant analysis (LDA) in the principal component space is adopted. Face images are first reduced in dimensionality via principal components in a nonlinear high-dimensional space; the resulting subspace is then further reduced by LDA in the principal component space, and a Euclidean-distance nearest-neighbor classifier (KNN) classifies the samples. The algorithm is validated with Matlab on the ORL face database. Experiments show that its recognition performance improves markedly and is clearly superior to the other algorithms.

18.
This paper proposes a view-invariant gait recognition algorithm that builds a unique view-invariant model by taking advantage of the dimensionality reduction provided by Direct Linear Discriminant Analysis (DLDA). The proposed scheme reduces the under-sampling problem (USP) that usually appears when the number of training samples is much smaller than the dimension of the feature space. The approach uses Gait Energy Images (GEIs) and DLDA to create a view-invariant model that determines the identity of the person under analysis with high accuracy, independently of the viewing angle. Evaluation results show that the proposed scheme provides recognition performance that is largely independent of the view angle, and that it compares favorably with previously proposed gait recognition methods in terms of both computational complexity and recognition accuracy.

19.
This paper develops a supervised discriminant technique, called graph embedding discriminant analysis (GEDA), for dimensionality reduction of high-dimensional data in small sample size problems. GEDA can be seen as a linear approximation of a multimanifold-based learning framework that takes the nonlocal property into account in addition to the marginal and local properties. GEDA seeks a set of projections that compact the intra-class samples and maximize the inter-class margin while simultaneously maximizing the nonlocal scatter. This characteristic makes GEDA more intuitive and more powerful than linear discriminant analysis (LDA) and marginal Fisher analysis (MFA). The proposed method is applied to face recognition and examined on the Yale, ORL and AR face image databases. The experimental results show that GEDA consistently outperforms LDA and MFA when the training sample size per class is small.

20.
The choice of kernel function and kernel parameters influences the performance of a kernel learning machine: different kernels and parameter settings yield different geometric structures of the empirical feature space. The traditional approach of tuning only the kernel parameters cannot reshape the data distribution in the empirical feature space, and is therefore insufficient to improve kernel learning performance. This paper applies kernel optimization to enhance kernel discriminant analysis (KDA) and proposes a Kernel Optimization-based Discriminant Analysis (KODA) for face recognition. The KODA procedure consists of two steps, kernel optimization and projection: KODA automatically adjusts the kernel parameters according to the input samples, thereby improving feature extraction for face recognition. Simulations on the Yale and ORL face databases demonstrate the feasibility of enhancing KDA with kernel optimization.
