Similar Literature
Found 20 similar documents (search time: 31 ms)
1.
Appearance-based methods, especially linear discriminant analysis (LDA), have been very successful in facial feature extraction, but the recognition performance of LDA is often degraded by the so-called "small sample size" (SSS) problem. One popular solution to the SSS problem is principal component analysis (PCA) + LDA (Fisherfaces), but LDA in other low-dimensional subspaces may be more effective. In this correspondence, we propose a novel fast feature extraction technique, bidirectional PCA (BDPCA) plus LDA (BDPCA + LDA), which performs LDA in the BDPCA subspace. Two face databases, the ORL and the Facial Recognition Technology (FERET) databases, are used to evaluate BDPCA + LDA. Experimental results show that BDPCA + LDA requires less computation and memory and achieves higher recognition accuracy than PCA + LDA.
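The two-stage PCA+LDA (Fisherfaces) baseline that this abstract compares against can be sketched briefly. The sketch below uses scikit-learn on toy data with hypothetical sizes; the BDPCA+LDA method itself replaces the PCA stage with a bidirectional (row- and column-wise) PCA and is not reproduced here.

```python
# A minimal PCA+LDA (Fisherfaces) sketch on toy data -- illustrative only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Toy "face" data: 30 samples of dimension 64, 3 classes (hypothetical sizes).
X = rng.normal(size=(30, 64)) + np.repeat(np.eye(3), 10, axis=0) @ rng.normal(size=(3, 64))
y = np.repeat([0, 1, 2], 10)

# Stage 1: PCA keeps n_samples - n_classes components so the within-class
# scatter in the reduced space is nonsingular (the usual Fisherfaces choice).
pca = PCA(n_components=len(X) - 3).fit(X)
Z = pca.transform(X)

# Stage 2: LDA in the PCA subspace yields at most n_classes - 1 features.
lda = LinearDiscriminantAnalysis(n_components=2).fit(Z, y)
features = lda.transform(Z)
print(features.shape)  # (30, 2)
```

The `n_samples - n_classes` choice for the PCA stage is the standard Fisherfaces heuristic for avoiding the SSS singularity.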

2.
A two-stage linear discriminant analysis via QR-decomposition
Linear discriminant analysis (LDA) is a well-known method for feature extraction and dimension reduction. It has been used widely in many applications involving high-dimensional data, such as image and text classification. An intrinsic limitation of classical LDA is the so-called singularity problem; that is, it fails when all scatter matrices are singular. Many LDA extensions were proposed in the past to overcome the singularity problem. Among these extensions, PCA+LDA, a two-stage method, received relatively more attention. In PCA+LDA, the LDA stage is preceded by an intermediate dimension reduction stage using principal component analysis (PCA). Most previous LDA extensions are computationally expensive and not scalable, due to the use of singular value decomposition or generalized singular value decomposition. In this paper, we propose a two-stage LDA method, namely LDA/QR, which aims to overcome the singularity problem of classical LDA while achieving efficiency and scalability simultaneously. The key difference between LDA/QR and PCA+LDA lies in the first stage, where LDA/QR applies QR decomposition to a small matrix involving the class centroids, while PCA+LDA applies PCA to the total scatter matrix involving all training data points. We further justify the proposed algorithm by showing the relationship between LDA/QR and previous LDA methods. Extensive experiments on face images and text documents are presented to show the effectiveness of the proposed algorithm.
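The two stages described above can be sketched in a few lines of numpy. This is a rough illustration of the LDA/QR idea, a QR decomposition of the small class-centroid matrix followed by a reduced LDA problem; details such as centroid weighting and the exact second-stage formulation follow the paper only approximately, and all sizes are hypothetical.

```python
# Rough LDA/QR sketch: stage 1 is QR of the d x k centroid matrix (cheap,
# no SVD of a d x d scatter matrix); stage 2 is LDA in the k-dim subspace.
import numpy as np

rng = np.random.default_rng(1)
d, k, n_per = 100, 4, 15                      # dims, classes, samples/class
X = rng.normal(size=(k * n_per, d))
y = np.repeat(np.arange(k), n_per)
X[y == 1] += 2.0                              # make one class separable-ish

# Stage 1: QR of the class-centroid matrix, cost roughly O(d * k^2).
C = np.stack([X[y == c].mean(axis=0) for c in range(k)], axis=1)   # (d, k)
Q, _ = np.linalg.qr(C)                        # Q: (d, k), orthonormal columns

# Stage 2: solve the small k x k LDA problem on the projected data.
Z = X @ Q
mu = Z.mean(axis=0)
Sb = sum(n_per * np.outer(Z[y == c].mean(0) - mu, Z[y == c].mean(0) - mu)
         for c in range(k))
Sw = sum(n_per * np.cov((Z[y == c] - Z[y == c].mean(0)).T, bias=True)
         for c in range(k))
evals, G = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(k), Sb))
order = np.argsort(-np.real(evals))[:k - 1]
W = Q @ np.real(G[:, order])                  # final d x (k-1) transform
print(W.shape)  # (100, 3)
```

Because stage 1 only ever touches the d x k centroid matrix, the cost stays linear in the data dimension d, which is the efficiency argument the abstract makes against SVD-based extensions.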

3.
Block-wise 2D kernel PCA/LDA for face recognition
Direct extension of (2D) matrix-based linear subspace algorithms to kernel-induced feature space is computationally intractable and also fails to exploit local characteristics of input data. In this letter, we develop a 2D generalized framework which integrates the concept of kernel machines with 2D principal component analysis (PCA) and 2D linear discriminant analysis (LDA). In order to remedy the mentioned drawbacks, we propose a block-wise approach based on the assumption that data is multi-modally distributed in so-called block manifolds. The proposed methods, namely block-wise 2D kernel PCA (B2D-KPCA) and block-wise 2D generalized discriminant analysis (B2D-GDA), attempt to find local nonlinear subspace projections in each block manifold or alternatively search for linear subspace projections in the kernel space associated with each blockset. Experimental results on the ORL face database attest to the reliability of the proposed block-wise approach compared with related published methods.

4.
A unified framework for subspace face recognition
PCA, LDA, and Bayesian analysis are the three most representative subspace face recognition approaches. In this paper, we show that they can be unified under the same framework. We first model face difference with three components: intrinsic difference, transformation difference, and noise. A unified framework is then constructed by using this face difference model and a detailed subspace analysis on the three components. We explain the inherent relationship among different subspace methods and their unique contributions to the extraction of discriminating information from the face difference. Based on the framework, a unified subspace analysis method is developed using PCA, Bayes, and LDA as three steps. A 3D parameter space is constructed using the three subspace dimensions as axes. Searching through this parameter space, we achieve better recognition performance than standard subspace methods.

5.
Principal component analysis (PCA) and linear discriminant analysis (LDA) are widely used in face recognition, but the high computational complexity of PCA prevents recognition from meeting real-time requirements, while LDA suffers from the "small sample size" and "edge class" problems, which reduce recognition accuracy. To address these issues, a method is proposed that fuses two-dimensional PCA (2DPCA) with an improved LDA. 2DPCA extracts...

6.
Three models, reduced-rank linear discriminant analysis (RRLDA), reduced-rank quadratic discriminant analysis (RRQDA), and principal component analysis plus linear discriminant analysis (PCA+LDA), are applied to the data and evaluated on the vowel test data set. Misclassification-rate curves are plotted for the three models, together with the optimal decision boundaries of RRLDA and PCA+LDA after reduction to two dimensions. The experimental results show that RRLDA outperforms PC...

7.
Traditional PCA and LDA algorithms are limited by the "small sample size" problem and are insensitive to higher-order correlations among pixels. This paper combines kernel methods with normalized LDA: the original image space is transformed into a high-dimensional feature space via a nonlinear mapping, and discriminant analysis is applied in the new space with the help of the "kernel trick". Extensive experiments on the ORL face database show that the proposed method clearly outperforms PCA, KPCA, LDA, and other traditional face recognition methods in feature extraction, achieving a high recognition rate while simplifying the classifier.

8.
We present a modular linear discriminant analysis (LDA) approach for face recognition. A set of observers is trained independently on different regions of frontal faces and each observer projects face images to a lower-dimensional subspace. These lower-dimensional subspaces are computed using LDA methods, including a new algorithm that we refer to as direct, weighted LDA or DW-LDA. DW-LDA combines the advantages of two recent LDA enhancements, namely direct LDA (D-LDA) and weighted pairwise Fisher criteria. Each observer performs recognition independently and the results are combined using a simple sum-rule. Experiments compare the proposed approach to other face recognition methods that employ linear dimensionality reduction. These experiments demonstrate that the modular LDA method performs significantly better than other linear subspace methods. The results also show that D-LDA does not necessarily perform better than the well-known principal component analysis followed by LDA approach. This is an important and significant counterpoint to previously published experiments that used smaller databases. Our experiments also indicate that the new DW-LDA algorithm is an improvement over D-LDA.

9.
Traditional PCA and LDA algorithms are limited by the "small sample size" problem and are insensitive to higher-order correlations among pixels. This paper combines kernel methods with normalized LDA: the original image space is transformed into a high-dimensional feature space via a nonlinear mapping, and discriminant analysis is applied in the new space using the "kernel trick". Extensive experiments on the ORL face database show that the method outperforms PCA, KPCA, LDA, and other methods in feature extraction, achieving a high recognition rate while simplifying the classifier.

10.
Linear discriminant analysis (LDA) is one of the most effective feature extraction methods in statistical pattern recognition, which extracts the discriminant features by maximizing the so-called Fisher's criterion, defined as the ratio of the between-class scatter matrix to the within-class scatter matrix. However, classification of high-dimensional statistical data is usually not amenable to standard pattern recognition techniques because of an underlying small sample size (SSS) problem. A popular approach to the SSS problem is the removal of non-informative features via subspace-based decomposition techniques. Motivated by this viewpoint, many elaborate subspace decomposition methods, including Fisherface, direct LDA (D-LDA), complete PCA plus LDA (C-LDA), random discriminant analysis (RDA) and multilinear discriminant analysis (MDA), have been developed, especially in the context of face recognition. Nevertheless, how to search for a set of complete optimal subspaces for discriminant analysis is still a hot topic of research in the area of LDA. In this paper, we propose a novel discriminant criterion, called the optimal symmetrical null space (OSNS) criterion, that can be used to compute Fisher's maximal discriminant criterion combined with the minimal one. Meanwhile, using the reformed criterion, the complete symmetrical subspaces based on the within-class and between-class scatter matrices are constructed, respectively. Different from the traditional subspace learning criterion that derives only one principal subspace, in our approach two null subspaces and their orthogonal complements are all obtained through the optimization of the OSNS criterion. Therefore, the algorithm based on OSNS has the potential to outperform the traditional LDA algorithms, especially in cases of small sample size. Experimental results conducted on the ORL, FERET, XM2VTS and NUST603 face image databases demonstrate the effectiveness of the proposed method.

11.
Face recognition using laplacianfaces
We propose an appearance-based face recognition method called the Laplacianface approach. By using locality preserving projections (LPP), the face images are mapped into a face subspace for analysis. Different from principal component analysis (PCA) and linear discriminant analysis (LDA), which effectively see only the Euclidean structure of face space, LPP finds an embedding that preserves local information and obtains a face subspace that best detects the essential face manifold structure. The Laplacianfaces are the optimal linear approximations to the eigenfunctions of the Laplace-Beltrami operator on the face manifold. In this way, the unwanted variations resulting from changes in lighting, facial expression, and pose may be eliminated or reduced. Theoretical analysis shows that PCA, LDA, and LPP can be obtained from different graph models. We compare the proposed Laplacianface approach with Eigenface and Fisherface methods on three different face data sets. Experimental results suggest that the proposed Laplacianface approach provides a better representation and achieves lower error rates in face recognition.
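The LPP projection behind Laplacianfaces reduces to a generalized eigenproblem on graph-based scatter matrices. The compact sketch below builds a heat-kernel affinity on a k-nearest-neighbour graph and solves X^T L X a = lambda X^T D X a for the smallest eigenvalues; the parameter choices (k, t) and data sizes are illustrative, not taken from the paper.

```python
# Minimal LPP sketch: kNN graph with heat-kernel weights, then the
# generalized eigenproblem for locality-preserving projections.
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 20))                  # 60 samples, 20 dims (toy data)

k, t = 5, 1.0
D2 = cdist(X, X, "sqeuclidean")
W = np.exp(-D2 / t)                            # heat-kernel affinities
np.fill_diagonal(D2, np.inf)                   # exclude self when picking kNN
nn = np.argsort(D2, axis=1)[:, :k]
mask = np.zeros_like(W, dtype=bool)
for i, idx in enumerate(nn):
    mask[i, idx] = True
W = np.where(mask | mask.T, W, 0.0)            # keep symmetrized kNN edges only
np.fill_diagonal(W, 0.0)

Dg = np.diag(W.sum(axis=1))                    # degree matrix
L = Dg - W                                     # graph Laplacian
A = X.T @ L @ X
B = X.T @ Dg @ X + 1e-6 * np.eye(X.shape[1])   # small ridge for stability
vals, vecs = eigh(A, B)                        # eigenvalues in ascending order
P = vecs[:, :3]                                # 3 locality-preserving directions
print(P.shape)  # (20, 3)
```

Picking the *smallest* generalized eigenvalues (rather than the largest, as in PCA) is what makes the embedding preserve local neighbourhood structure.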

12.
Principal component analysis (PCA) and linear discriminant analysis (LDA) are two important feature extraction methods and have been widely applied in a variety of areas. A limitation of PCA and LDA is that when dealing with image data, the image matrices must first be transformed into vectors, which are usually of very high dimensionality. This causes expensive computational cost and sometimes the singularity problem. Recently two methods called two-dimensional PCA (2DPCA) and two-dimensional LDA (2DLDA) were proposed to overcome this disadvantage by working directly on 2-D image matrices without a vectorization procedure. 2DPCA and 2DLDA significantly reduce the computational effort and the possibility of singularity in feature extraction. In this paper, we show that these matrix-based 2-D algorithms are equivalent to special cases of image-block-based feature extraction, i.e., partitioning each image into several blocks and performing standard PCA or LDA on the aggregate of all image blocks. These results thus provide a better understanding of the 2-D feature extraction approaches.
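The vectorization-free idea in 2DPCA is easy to make concrete: the image covariance matrix is only as large as the image width, regardless of how many pixels each image has. The sketch below is a plain-numpy illustration with hypothetical image sizes.

```python
# Short 2DPCA sketch: G = E[(A - Abar)^T (A - Abar)] is computed directly
# from 2-D image matrices, so G is only w x w, and each image projects to
# an h x q feature matrix -- no vectorization, no huge scatter matrix.
import numpy as np

rng = np.random.default_rng(3)
imgs = rng.normal(size=(50, 32, 28))           # 50 toy images of 32 x 28

Abar = imgs.mean(axis=0)
G = np.zeros((28, 28))                         # image covariance, w x w only
for A in imgs:
    Dm = A - Abar
    G += Dm.T @ Dm
G /= len(imgs)

evals, evecs = np.linalg.eigh(G)               # ascending eigenvalue order
Xproj = evecs[:, ::-1][:, :5]                  # top 5 eigenvectors, (28, 5)
features = imgs @ Xproj                        # each image -> 32 x 5 features
print(features.shape)  # (50, 32, 5)
```

The equivalence result in the abstract says this is the same as running standard PCA on the aggregate of the images' rows treated as separate samples.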

13.
PCA plus LDA is a popular framework for linear discriminant analysis (LDA) in the high-dimensional, singular case. In this paper, we focus on building a theoretical foundation for this framework. Moreover, we point out the weakness of previous LDA-based methods and suggest a complete PCA plus LDA algorithm. Experimental results on the ORL face image database indicate that the proposed method is more effective than the previous ones.

14.
Kernel principal component analysis (KPCA) and kernel linear discriminant analysis (KLDA) are two commonly used and effective methods for dimensionality reduction and feature extraction. In this paper, we propose a KLDA method based on maximal class separability for extracting the optimal features of analog fault data sets, where the proposed KLDA method is compared with principal component analysis (PCA), linear discriminant analysis (LDA) and KPCA methods. Meanwhile, a novel particle swarm optimization (PSO) based algorithm is developed to tune the parameters and structures of neural networks jointly. Our study shows that KLDA is overall superior to PCA, LDA and KPCA in feature extraction performance, and the proposed PSO-based algorithm is more convenient to implement and trains better than the back-propagation algorithm. The simulation results demonstrate the effectiveness of these methods.
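The kernel-PCA step that this abstract benchmarks against can be illustrated in a few lines with scikit-learn. The data below is a hypothetical stand-in (two concentric rings, a classic nonlinearly separable case) for the analog fault data sets used in the paper; the KLDA variant and the PSO-tuned network are not reproduced here.

```python
# Brief KPCA sketch: an RBF kernel lets PCA capture nonlinear structure
# (concentric rings) that linear PCA cannot separate.
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(4)
theta = rng.uniform(0, 2 * np.pi, size=200)
r = np.repeat([1.0, 3.0], 100)                 # two rings of radius 1 and 3
X = np.c_[r * np.cos(theta), r * np.sin(theta)] + 0.05 * rng.normal(size=(200, 2))

kpca = KernelPCA(n_components=2, kernel="rbf", gamma=0.5).fit(X)
Z = kpca.transform(X)
print(Z.shape)  # (200, 2)
```

The `gamma` value is an illustrative choice; in practice it is the kind of kernel parameter the paper's PSO-based tuning would search over.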

15.
Eigenface (PCA) and Fisherface (LDA) are two of the most commonly used subspace techniques in the area of face recognition. PCA maximizes not only intersubject variation but also intrasubject variation, without considering the class labels even when they are available. LDA is prone to overfitting when the training data set is small, which widely occurs in face recognition. In this work, we present a binary feature selection (BFS) method to choose the most suitable set of eigenfaces for classification when only a small number of training samples per subject are available. In the proposed method, we make use of class labels, treat two subjects as a group, and then pick out the most suitable eigenfaces that help to distinguish these two subjects to form a binary classifier. The final classifier integrates these binary classifiers by voting. Experiments on the AR and AT&T face databases with small training data sets show that our proposed method outperforms not only traditional PCA and LDA but also some state-of-the-art methods.

16.
Linear discriminant analysis (LDA) is one of the most classical methods for subspace learning and supervised discriminative feature extraction. Inspired by manifold learning, many improved LDA-based methods have been proposed in recent years. Although they start from different points, these algorithms all essentially measure the spatial scatter of samples with Euclidean distance. The nonlinear nature of the Euclidean distance raises two problems: 1) the algorithms are sensitive to noise and outlier samples; 2) the algorithms over-emphasize sample points with large local scatter in manifold or multi-modal data sets, so the essential structure of the data is destroyed during feature extraction. To solve these problems, a new dimensionality-reduction method based on nonparametric discriminant analysis (NDA), called dynamically weighted nonparametric discriminant analysis (DWNDA), is proposed. DWNDA uses dynamically weighted distances to compute the between-class and within-class scatter, which not only preserves the essential structure of multi-modal data sets but also effectively exploits the discriminative information between pairs of boundary samples. DWNDA therefore shows strong robustness to noise and outliers in noise experiments. In addition, DWNDA achieves excellent results in experiments on face and handwriting databases.

17.
To address LDA's difficulty in representing data with complex distributions, this paper proposes a new nonparametric construction of the scatter matrices, which better characterizes classification-boundary information and retains more information useful for classification. Because the nonparametric within-class scatter matrix may be singular in the small-sample-size case, a two-stage discriminant analysis method is proposed to optimize the criterion function. The method projects face images, via singular value decomposition, onto the principal subspace of the mixed scatter matrix, in which the within-class scatter matrix is nonsingular; through the CS decomposition, the simultaneous diagonalization of the scatter matrices is analyzed theoretically, and the resulting projection matrix is proved to satisfy the orthogonality constraint. Results on the ORL, Yale, and YaleB face databases show that the improved algorithm outperforms subspace methods such as PCA+LDA, ULDA, and OLDA.

18.
Principal component analysis (PCA) is an important feature extraction method in pattern recognition, which extracts the main features of samples via the K-L expansion. Building on it, an extended PCA method for face recognition, modular sorted PCA (MSPCA), is proposed. MSPCA first partitions the image matrix into blocks and, for all resulting sub-image matrices, uses PCA to compute and label the eigenvectors corresponding to all eigenvalues; it then finds the eigenvectors corresponding to the k largest of all these eigenvalues and uses them to extract features from the sub-images they belong to. Finally, on the basis of MSPCA, the feature matrices extracted from the sub-images are merged, and the merged feature matrix is treated as a new sample for PCA+LDA. Compared with PCA and PCA+LDA, modular sorted PCA operates on sub-image matrices and thus avoids singular value decomposition, making it simpler. Experimental results on the ORL face database show that the proposed method clearly outperforms classical PCA and PCA+LDA in recognition performance.

19.
In accelerometer-based human activity recognition, LDA is one of the most commonly used dimensionality-reduction methods, but it does not take the training error directly as its objective function, so it cannot guarantee the projection space with minimal training error. To address this, LDA optimized by a genetic algorithm (GA) is adopted for feature selection. Features are extracted from the acceleration signals, PCA is used to deal with the "small sample size" problem, and the GA adjusts the eigenvalue vector of the between-class scatter matrix in LDA so that the resulting projection space minimizes the training error. An SVM then classifies seven daily activities. Experimental results show that, compared with PCA alone and with PCA+LDA, the GA-optimized LDA maintains a high recognition rate while effectively reducing the feature dimensionality and the classification error; the final recognition rate on the test samples reaches 95.96%.

20.
In supervised learning, each class of data has its own characteristics and the classes are independent of one another. Inspired by this, a projection algorithm based on identical neighborhoods is proposed. The new algorithm computes a set of basis functions to find, for each class, a highly symmetric identical-neighborhood space in a low-dimensional space. The identical neighborhoods can be obtained by constructing a regular simplex, and the basis functions can be obtained by convex optimization. A test sample is mapped into the low-dimensional identical-neighborhood space through the basis functions, and its class is determined by its distances to the centers of the identical-neighborhood spaces, without computing distances to all training samples. Experiments demonstrate the effectiveness of the new method.

