Similar Literature
20 similar articles found.
1.
Discriminative common vectors for face recognition   (Cited: 7; self-citations: 0; by others: 7)
In face recognition tasks, the dimension of the sample space is typically larger than the number of the samples in the training set. As a consequence, the within-class scatter matrix is singular and the linear discriminant analysis (LDA) method cannot be applied directly. This problem is known as the "small sample size" problem. In this paper, we propose a new face recognition method called the discriminative common vector method based on a variation of Fisher's linear discriminant analysis for the small sample size case. Two different algorithms are given to extract the discriminative common vectors representing each person in the training set of the face database. One algorithm uses the within-class scatter matrix of the samples in the training set while the other uses the subspace methods and the Gram-Schmidt orthogonalization procedure to obtain the discriminative common vectors. Then, the discriminative common vectors are used for classification of new faces. The proposed method yields an optimal solution for maximizing the modified Fisher's linear discriminant criterion given in the paper. Our test results show that the discriminative common vector method is superior to other methods in terms of recognition accuracy, efficiency, and numerical stability.
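The null-space idea in this abstract can be sketched in a few lines of NumPy: project the data onto the null space of the within-class scatter, where every sample of a class collapses to its common vector, then take the principal directions of those common vectors. The sketch below is an illustration under that reading, not the authors' exact implementation; the toy data, function name, and tolerance are made up.

```python
import numpy as np

def discriminative_common_vectors(X, y, tol=1e-10):
    """Null-space sketch: common vector per class, then its principal directions."""
    classes = np.unique(y)
    n_features = X.shape[1]
    # Within-class scatter Sw (singular in the small-sample-size case).
    Sw = np.zeros((n_features, n_features))
    for c in classes:
        Xc = X[y == c]
        d = Xc - Xc.mean(axis=0)
        Sw += d.T @ d
    # Orthonormal basis of null(Sw); nonempty only when n_features > n_samples - n_classes.
    eigval, eigvec = np.linalg.eigh(Sw)
    null_basis = eigvec[:, eigval < tol * eigval.max()]
    # Common vector of each class: projection of any one class sample onto null(Sw).
    commons = np.array([null_basis @ (null_basis.T @ X[y == c][0]) for c in classes])
    # Discriminative common vectors: principal directions of the class common vectors.
    centered = commons - commons.mean(axis=0)
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[s > tol].T, commons   # projection matrix (n_features x rank), common vectors

# Toy small-sample-size example: 3 classes, 2 samples each, 50-dimensional features.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 50))
y = np.array([0, 0, 1, 1, 2, 2])
W, commons = discriminative_common_vectors(X, y)
print("projection shape:", W.shape)   # expected (50, 2): at most L-1 directions
```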

2.
Discriminant analysis is effective in extracting discriminative features and reducing dimensionality. In this paper, we propose an optimal subset-division based discrimination (OSDD) approach to enhance the classification performance of discriminant analysis technique. OSDD first divides the sample set into several subsets by using an improved stability criterion and K-means algorithm. We separately calculate the optimal discriminant vectors from each subset. Then we construct the projection transformation by combining the discriminant vectors derived from all subsets. Furthermore, we provide a nonlinear extension of OSDD, that is, the optimal subset-division based kernel discrimination (OSKD) approach. It employs the kernel K-means algorithm to divide the sample set in the kernel space and obtains the nonlinear projection transformation. The proposed approaches are applied to face and palmprint recognition, and are examined using the AR and FERET face databases and the PolyU palmprint database. The experimental results demonstrate that the proposed approaches outperform several related linear and nonlinear discriminant analysis methods.
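A minimal sketch of the subset-division idea, assuming K-means for the split and scikit-learn's LDA for the per-subset discriminant vectors; the paper's improved stability criterion, its choice of the number of subsets, and the kernel (OSKD) variant are not reproduced, and the bundled digits data stand in for face or palmprint images.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fit_subset_lda(X, y, n_subsets=2, random_state=0):
    """Partition the training set with K-means and fit one LDA per subset."""
    km = KMeans(n_clusters=n_subsets, n_init=10, random_state=random_state).fit(X)
    models = []
    for s in range(n_subsets):
        mask = km.labels_ == s
        if len(np.unique(y[mask])) >= 2:        # a usable subset needs >= 2 classes
            models.append(LinearDiscriminantAnalysis().fit(X[mask], y[mask]))
    return models

def project(models, X):
    # Combined projection transformation: concatenate the per-subset discriminant features.
    return np.hstack([m.transform(X) for m in models])

X, y = load_digits(return_X_y=True)
models = fit_subset_lda(X, y, n_subsets=3)
features = project(models, X)
print("combined feature dimension:", features.shape[1])
```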

3.
Theory and Verification of Fisher Linear Discriminant Analysis in the Small Sample Size Case   (Cited: 12; self-citations: 0; by others: 12)
Linear discriminant analysis is one of the most classical and widely used feature extraction methods. In recent years, how to extract the optimal Fisher discriminant features in the small sample size case has been a concern of many researchers. Applying the principles of projection transformation and isomorphic transformation, this paper solves, in theory, the problem of computing the optimal discriminant vectors in the small sample size case: the optimal discriminant vectors can be obtained in a low-dimensional space. A feature extraction model is given, together with a PPCA+LDA algorithm for solving the model. Experiments were carried out on grayscale images of three resolutions from the ORL face database. The results show that the discriminant vectors extracted by the PPCA+LDA algorithm have strong feature extraction ability, reach a high recognition rate with an ordinary minimum-distance classifier, and give very stable recognition results.
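The "solve the optimal discriminant vectors in a low-dimensional space" step can be illustrated with a standard PCA-then-LDA (Fisherfaces-style) pipeline; this is only a sketch in that spirit, not the paper's PPCA+LDA algorithm, and the synthetic data below replace the ORL images.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Synthetic small-sample-size setting: 15 samples, 200 dimensions.
rng = np.random.default_rng(0)
n_classes, per_class, dim = 5, 3, 200
X = np.vstack([rng.normal(loc=c, size=(per_class, dim)) for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), per_class)

model = make_pipeline(
    PCA(n_components=n_classes * per_class - n_classes),  # keep N - L components so Sw stays nonsingular
    LinearDiscriminantAnalysis(),                          # Fisher LDA solved in the reduced space
)
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```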

4.
Kernel Fisher discriminant analysis (KFDA) extracts a nonlinear feature from a sample by calculating as many kernel functions as the training samples. Thus, its computational efficiency is inversely proportional to the size of the training sample set. In this paper we propose a more efficient approach to nonlinear feature extraction, FKFDA (fast KFDA). This FKFDA consists of two parts. First, we select a portion of training samples based on two criteria produced by approximating the kernel principal component analysis (AKPCA) in the kernel feature space. Then, referring to the selected training samples as nodes, we formulate FKFDA to improve the efficiency of nonlinear feature extraction. In FKFDA, the discriminant vectors are expressed as linear combinations of nodes in the kernel feature space, and the extraction of a feature from a sample only requires calculating as many kernel functions as the nodes. Therefore, the proposed FKFDA has a much faster feature extraction procedure compared with the naive kernel-based methods. Experimental results on face recognition and benchmark classification data sets suggest that the proposed FKFDA generates features that classify well.
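The reduced-set idea can be sketched by mapping each sample onto kernel evaluations against a small node set, so extracting a feature needs only as many kernel functions as there are nodes. In the sketch below the nodes are a plain random subsample standing in for the paper's AKPCA-based selection criteria, and scikit-learn's LDA plays the role of the Fisher discriminant in the node-induced space.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

rng = np.random.default_rng(0)
nodes = Xtr[rng.choice(len(Xtr), size=100, replace=False)]   # 100 nodes << len(Xtr)

gamma = 1.0 / Xtr.shape[1]
Ktr = rbf_kernel(Xtr, nodes, gamma=gamma)   # empirical kernel map onto the nodes only
Kte = rbf_kernel(Xte, nodes, gamma=gamma)   # a new sample needs just len(nodes) kernel calls

clf = LinearDiscriminantAnalysis().fit(Ktr, ytr)   # Fisher discriminant in the node space
print("test accuracy:", round(clf.score(Kte, yte), 3))
```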

5.
To address the unavoidable small sample size problem in ear recognition, a recognition algorithm based on Gabor features and an improved LDA (ILDA) is proposed. The algorithm first extracts local Gabor features of the ear, then redefines the Fisher criterion and the within-class scatter matrix, maps the high-dimensional space to a low-dimensional one and searches for the optimal projection directions, and finally classifies by the Euclidean distance between the feature projections of training and test samples. Compared with traditional methods, the new algorithm effectively solves the small sample size problem in ear recognition and achieves higher recognition accuracy.
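A sketch of the local-Gabor-feature step, assuming a small filter bank built with scikit-image's gabor filter and magnitude responses concatenated as the feature vector; the improved-LDA (ILDA) step and actual ear images are not reproduced.

```python
import numpy as np
from skimage.filters import gabor

def gabor_features(image, frequencies=(0.1, 0.2, 0.3), n_orientations=4):
    """Concatenate Gabor magnitude responses over a small filter bank."""
    feats = []
    for f in frequencies:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            real, imag = gabor(image, frequency=f, theta=theta)
            feats.append(np.hypot(real, imag).ravel())     # magnitude response
    return np.concatenate(feats)

img = np.random.default_rng(0).random((32, 32))             # stand-in for an ear image
print("Gabor feature dimension:", gabor_features(img).shape[0])   # 32*32*12 = 12288
```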

6.
吕冰, 王士同. 《计算机应用》 (Journal of Computer Applications), 2006, 26(11): 2781-2783
A kernel-based algorithm, K1PMDA, for finding the optimal solution of multivariate discriminant analysis is proposed and applied to face recognition. Linear face recognition suffers from two prominent problems: (1) under large variations in illumination, expression, and pose, the classification of face images is complex and nonlinear; (2) the small sample size problem, i.e., when the number of training samples is smaller than the dimensionality of the sample feature space, the within-class scatter matrix becomes singular. For the former, the kernel trick can be used to extract nonlinear features of face image samples; for the latter, a perturbation algorithm that adds a perturbation parameter is adopted. Experiments on the ORL, Yale Group B, and UMIST face databases show that the algorithm is feasible and efficient.
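The perturbation fix mentioned here (adding a small parameter so the within-class scatter becomes invertible) can be illustrated in the linear case as follows; the kernel part of K1PMDA is not reproduced, and the regularization constant and toy data are made up.

```python
import numpy as np

def fisher_directions_regularized(X, y, mu=1e-3):
    """Fisher discriminant directions with a perturbed (regularized) within-class scatter."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - mean_all, mc - mean_all)
    # Sw is singular when samples < dimensions; Sw + mu*I keeps the problem well posed.
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + mu * np.eye(d), Sb))
    order = np.argsort(evals.real)[::-1]
    return evecs.real[:, order[:len(classes) - 1]]

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 30))                    # 8 samples, 30 dimensions: Sw is singular
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
W = fisher_directions_regularized(X, y)
print("discriminant directions:", W.shape)      # (30, 1)
```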

7.
An Orthogonal Feature Extraction Algorithm with Within-Class Preservation for the Small Sample Size Problem   (Cited: 1; self-citations: 0; by others: 1)
In face recognition, feature extraction algorithms with orthogonality constraints are an effective class of methods, but they are limited by the small sample size problem. Building on orthogonal discriminant locality preserving projections, this paper proposes an orthogonal feature extraction algorithm with within-class preservation that is suitable for the small sample size problem. Based on the spatial structure information among samples of the same class, the algorithm redefines the within-class and between-class scatter matrices and derives a new objective function. However, for face recognition the new objective function still suffers from the small sample size problem. The original data space is therefore reduced to a low-dimensional subspace, which avoids the singularity of the total scatter matrix, and it is proved theoretically that solving for the discriminant vector set in this subspace is equivalent to solving for it in the original space. Experimental results on face databases demonstrate the effectiveness of the proposed algorithm.

8.
Reducing the dimensionality of high-dimensional data and selecting effective features are crucial for classification. To address the high dimensionality and small sample size problems in face recognition, starting from feature selection and subspace learning, an L2,1-norm-regularized uncorrelated discriminant analysis algorithm is proposed. The algorithm first performs singular value decomposition on the training sample matrix; then, through a series of transformations, the originally nonlinear Fisher discriminant criterion is converted into a linear model; finally, an L2,1-norm penalty term is added and the problem is solved to obtain a set of optimal discriminant vectors. Training and test samples are projected into this low-dimensional subspace and classified with a nearest-Euclidean-distance classifier. Because of the L2,1-norm penalty, the algorithm performs feature selection and subspace learning simultaneously, effectively improving recognition performance. Experimental results on the ORL, YaleB, and PIE face databases show that the algorithm further improves discriminative ability while reducing dimensionality effectively.
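The L2,1 norm in this abstract is the sum of row-wise L2 norms, so penalizing it drives whole rows of the projection matrix to zero and couples feature selection with subspace learning. The sketch below shows a generic iteratively reweighted solver for an L2,1-regularized least-squares problem as an illustration of that mechanism; it is not the paper's uncorrelated-discriminant formulation, and the data are synthetic.

```python
import numpy as np

def l21_norm(W):
    # Sum of row-wise L2 norms: small row norms mean the corresponding feature is discarded.
    return np.linalg.norm(W, axis=1).sum()

def l21_regularized_ls(X, Y, lam=1.0, n_iter=50, eps=1e-8):
    """Iteratively reweighted solver for min_W ||XW - Y||_F^2 + lam * ||W||_{2,1}."""
    d = X.shape[1]
    W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)        # ridge warm start
    for _ in range(n_iter):
        D = np.diag(1.0 / (2.0 * (np.linalg.norm(W, axis=1) + eps)))
        W = np.linalg.solve(X.T @ X + lam * D, X.T @ Y)            # reweighted update
    return W

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 12))
true_W = np.zeros((12, 3))
true_W[:4] = rng.normal(size=(4, 3))                               # only 4 informative features
Y = X @ true_W + 0.01 * rng.normal(size=(60, 3))

W = l21_regularized_ls(X, Y, lam=2.0)
print("L2,1 norm of solution:", round(l21_norm(W), 3))
print("rows effectively kept:", int(np.sum(np.linalg.norm(W, axis=1) > 1e-2)))  # close to 4
```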

9.
The generalized optimal discriminant vector set is an effective method for pattern classification in high-dimensional spaces. In that setting the number of training samples is usually smaller than the dimensionality of the feature space, so the within-class scatter matrix is singular and the problem cannot be solved directly. A series of methods have been proposed for this situation, but they all use the average between-class distance and do not take the worst case of the between-class distribution into account. This paper improves on them and proposes a new method for computing the generalized optimal discriminant vector set based on the principle of maximizing the minimum distance. Experimental results show that the proposed method outperforms existing ones.

10.
Subspace face recognition often suffers from two problems: (1) the training sample set is small compared with the high dimensional feature vector; (2) the performance is sensitive to the subspace dimension. Instead of pursuing a single optimal subspace, we develop an ensemble learning framework based on random sampling on all three key components of a classification system: the feature space, training samples, and subspace parameters. Fisherface and Null Space LDA (N-LDA) are two conventional approaches to address the small sample size problem. But in many cases, these LDA classifiers are overfitted to the training set and discard some useful discriminative information. By analyzing different overfitting problems for the two kinds of LDA classifiers, we use random subspace and bagging to improve them respectively. By random sampling on feature vectors and training samples, multiple stabilized Fisherface and N-LDA classifiers are constructed and the two groups of complementary classifiers are integrated using a fusion rule, so nearly all the discriminative information is preserved. In addition, we further apply random sampling on parameter selection in order to overcome the difficulty of selecting optimal parameters in our algorithms. Then, we use the developed random sampling framework for the integration of multiple features. A robust random sampling face recognition system integrating shape, texture, and Gabor responses is finally constructed.
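A minimal sketch of the random-sampling framework: many weakened PCA+LDA (Fisherface-style) classifiers are trained on random feature subsets (random subspace) and bootstrap sample sets (bagging), then fused by majority vote. The Null Space LDA branch, the paper's fusion rule, and the shape/texture/Gabor features are not reproduced; the bundled digits data stand in for face images.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

X, y = load_digits(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

rng = np.random.default_rng(0)
n_learners, n_feat, n_samp = 15, 32, int(0.8 * len(Xtr))
predictions = []
for _ in range(n_learners):
    feat_idx = rng.choice(X.shape[1], size=n_feat, replace=False)   # random subspace
    samp_idx = rng.choice(len(Xtr), size=n_samp, replace=True)      # bagging (bootstrap)
    clf = make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis())
    clf.fit(Xtr[np.ix_(samp_idx, feat_idx)], ytr[samp_idx])
    predictions.append(clf.predict(Xte[:, feat_idx]))

# Fuse the complementary weak classifiers by majority vote.
P = np.array(predictions)
fused = np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, P)
print("ensemble accuracy:", round(float(np.mean(fused == yte)), 3))
```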

11.
There are two kinds of methods for handling the small sample size problem in linear discriminant analysis: (1) before recognition, eliminate the singularity by reducing the dimensionality of the pattern feature vectors; (2) develop algorithms that obtain low-dimensional discriminant features directly. Combining the two, this work solves the problem of computing the uncorrelated optimal discriminant vector set under the generalized Fisher linear discriminant criterion in the high-dimensional, small-sample-size case, and gives an efficient algorithm for extracting the optimal discriminant vectors.

12.
Foley-Sammon optimal discriminant vectors using kernel approach   (Cited: 4; self-citations: 0; by others: 4)
A new nonlinear feature extraction method called kernel Foley-Sammon optimal discriminant vectors (KFSODVs) is presented in this paper. This new method extends the well-known Foley-Sammon optimal discriminant vectors (FSODVs) from linear domain to a nonlinear domain via the kernel trick that has been used in support vector machine (SVM) and other commonly used kernel-based learning algorithms. The proposed method also provides an effective technique to solve the so-called small sample size (SSS) problem which exists in many classification problems such as face recognition. We give the derivation of KFSODV and conduct experiments on both simulated and real data sets to confirm that the KFSODV method is superior to the previous commonly used kernel-based learning algorithms in terms of the performance of discrimination.

13.
A new nonlinear discriminant analysis algorithm, large-margin nonlinear discriminant analysis with minimized within-class scatter, is proposed. Its main idea is to map the original samples into a higher-dimensional space and use kernel techniques to improve the traditional large-margin classification algorithm: in the new high-dimensional space, reproducing-kernel techniques are used to find kernel discriminant vectors such that the kernel within-class scatter in that space is as small as possible. Experiments on the ORL face database, analyzing recognition rate and recognition time, show that the method has certain advantages.

14.
Class-incremental generalized discriminant analysis   (Cited: 2; self-citations: 0; by others: 2)
Zheng W. Neural Computation, 2006, 18(4): 979-1006
Generalized discriminant analysis (GDA) is the nonlinear extension of the classical linear discriminant analysis (LDA) via the kernel trick. Mathematically, GDA aims to solve a generalized eigenequation problem, which is always implemented by the use of singular value decomposition (SVD) in the previously proposed GDA algorithms. A major drawback of SVD, however, is the difficulty of designing an incremental solution for the eigenvalue problem. Moreover, there are still numerical problems of computing the eigenvalue problem of large matrices. In this article, we propose another algorithm for solving GDA in the small sample size case, which applies QR decomposition rather than SVD. A major contribution of the proposed algorithm is that it can incrementally update the discriminant vectors when new classes are inserted into the training set. The other major contribution of this article is the presentation of the modified kernel Gram-Schmidt (MKGS) orthogonalization algorithm for implementing the QR decomposition in the feature space, which is more numerically stable than the kernel Gram-Schmidt (KGS) algorithm. We conduct experiments on both simulated and real data to demonstrate the better performance of the proposed methods.

15.
To address nonlinear feature extraction and the shortage of labeled samples in face recognition, a method for computing semi-supervised discriminant vectors with orthogonality in the kernel space is proposed. The algorithm uses a kernel function to map face data into a high-dimensional nonlinear space, where Marginal Fisher Analysis (MFA) is applied to reduce the dimensionality of the small number of labeled samples while Unsupervised Discriminant Projection (UDP) learns from the large number of unlabeled samples; the objective function is constructed in a semi-supervised manner, the optimal projection vectors are found in an orthogonal fashion when solving the eigenvalue problem, and face recognition is then performed. Experiments on the ORL and YALE face databases verify the effectiveness of the algorithm.

16.
詹宇斌, 殷建平, 刘新旺. 《自动化学报》 (Acta Automatica Sinica), 2010, 36(12): 1645-1654
Traditional face feature extraction based on dimensionality reduction requires converting an image into a higher-dimensional vector, which aggravates the curse of dimensionality; for feature extraction based on the Fisher criterion, it also makes the small sample size problem more severe. Based on the matrix representation of images, this paper proposes a new face feature extraction method using the maximum margin criterion and bidirectional matrix projection (MMC-MBP). On the one hand, a Laplacian matrix that preserves data locality is introduced into the computation of the scatter matrices so as to preserve the manifold structure of the data and improve recognition accuracy; on the other hand, an effective and stable large-margin criterion, maximizing the trace difference, is adopted, which overcomes the small sample size problem caused by the Fisher criterion. More importantly, MMC-MBP gives an iterative procedure for computing the optimal bidirectional projection matrices; this procedure guarantees the monotonic increase and convergence of the objective function as well as the convergence of the projection matrices, thereby solving the problem that traditional tensor (matrix) projection based feature extraction methods suffer from excessively high feature dimensionality or lack a convergent solution. Extensive and systematic face recognition experiments show that the iterative procedure of MMC-MBP converges quickly and that, compared with face recognition methods such as Eigenfaces, Fisherfaces, and Laplacianfaces, it achieves higher recognition accuracy and is an effective face feature extraction method.
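The maximum-margin-criterion step this method builds on can be sketched directly: project onto the leading eigenvectors of Sb - Sw, which requires no matrix inversion and therefore avoids the singular within-class scatter of the small sample size case. The bidirectional (row/column) matrix iteration and the Laplacian locality term of MMC-MBP are not reproduced in this sketch, and the toy data are made up.

```python
import numpy as np

def mmc_projection(X, y, n_components):
    """Maximum margin criterion sketch: top eigenvectors of Sb - Sw (no inversion needed)."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - mean_all, mc - mean_all)
    evals, evecs = np.linalg.eigh(Sb - Sw)          # symmetric matrix, eigh is safe
    return evecs[:, np.argsort(evals)[::-1][:n_components]]

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 100))                      # 10 samples, 100 dims: SSS case
y = np.repeat([0, 1], 5)
W = mmc_projection(X, y, n_components=3)
print("projection shape:", W.shape)                 # (100, 3)
```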

17.
A Nonlinear Discriminant Feature Fusion Method for Face Recognition   (Cited: 2; self-citations: 0; by others: 2)
Recently, kernel methods for extracting nonlinear features, such as kernel Fisher discriminant analysis (KFDA), have been successful and widely used in face and other image recognition. Existing kernel methods, however, share the problem that constructing the kernel matrix in the feature space is computationally very expensive, and the single kind of features extracted often cannot yield satisfactory recognition results. This paper proposes a nonlinear discriminant feature fusion method for face recognition: first, wavelet transform and singular value decomposition are applied to the original input samples as dimensionality-reducing transformations, extracting two kinds of features from the same sample space; then the two kinds of features are combined by complex vectors to form a complex feature vector space; finally, optimal discriminant features are extracted in that space. Experimental results on the standard ORL face database show that the proposed method not only outperforms existing kernel Fisher discriminant analysis methods in recognition performance, but also speeds up feature extraction on the ORL face database by nearly a factor of eight.
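The complex-vector fusion step can be sketched as follows: two feature sets extracted from the same samples become the real and imaginary parts of one complex feature vector, on which complex-valued analysis (here a plain complex PCA) can then be performed. The wavelet and SVD features are replaced by random stand-ins, and the paper's final optimal discriminant extraction is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
feat_wavelet = rng.normal(size=(20, 16))   # stand-in for wavelet-domain features
feat_svd     = rng.normal(size=(20, 16))   # stand-in for singular-value features

# Fuse the two feature sets into one complex feature vector per sample.
Z = feat_wavelet + 1j * feat_svd

# Example follow-up: complex PCA via the Hermitian covariance matrix.
Zc = Z - Z.mean(axis=0)
cov = (Zc.conj().T @ Zc) / (len(Z) - 1)    # Hermitian covariance
evals, evecs = np.linalg.eigh(cov)          # eigh handles Hermitian matrices
print("leading complex component shape:", evecs[:, -1].shape)
```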

18.
Standard support vector machine (SVM) training algorithms have O(l^3) computational and O(l^2) space complexity, where l is the training set size, and are thus computationally infeasible on very large data sets. To alleviate the computational burden of SVM training, we propose an algorithm that trains SVMs on a set of bound vectors extracted using the Fisher projection. For linearly separable problems, the projection line is computed with the linear Fisher discriminant; for nonlinearly separable problems, it is computed with the kernel Fisher discriminant. In each case, a certain ratio of samples whose projections lie closest to those of the other class are selected as bound vectors. Theoretical analysis shows that the proposed algorithm has low computational and space complexity, and extensive experiments on several classification benchmarks demonstrate the effectiveness of our approach.
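A sketch of the bound-vector idea for the linear two-class case, assuming scikit-learn's LDA for the Fisher projection and a fixed selection ratio: each class keeps only the samples whose projections lie closest to the other class, and the SVM is trained on that reduced set. The kernel-Fisher variant and the paper's exact selection ratio are not reproduced.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=20, n_informative=10, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

# Fisher projection of the training set onto one discriminant direction.
proj = LinearDiscriminantAnalysis(n_components=1).fit(Xtr, ytr).transform(Xtr).ravel()

# Bound vectors: for each class, the fraction whose projections lie closest to the other class.
ratio = 0.2
keep = np.zeros(len(Xtr), dtype=bool)
lo, hi = (0, 1) if proj[ytr == 0].mean() < proj[ytr == 1].mean() else (1, 0)
for cls, take_largest in ((lo, True), (hi, False)):
    idx = np.where(ytr == cls)[0]
    order = idx[np.argsort(proj[idx])]           # class samples sorted by projection value
    n_keep = max(2, int(ratio * len(idx)))
    keep[order[-n_keep:] if take_largest else order[:n_keep]] = True

svm_full = SVC(kernel="linear").fit(Xtr, ytr)
svm_bound = SVC(kernel="linear").fit(Xtr[keep], ytr[keep])
print("full set     :", round(svm_full.score(Xte, yte), 3), "trained on", len(Xtr), "samples")
print("bound vectors:", round(svm_bound.score(Xte, yte), 3), "trained on", int(keep.sum()), "samples")
```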

19.
A new method of feature fusion and its application in image recognition   (Cited: 9; self-citations: 0; by others: 9)

20.
A Dimensionality Theorem for the Statistically Uncorrelated Optimal Discriminant Feature Space   (Cited: 5; self-citations: 1; by others: 5)
This paper proposes and rigorously proves a dimensionality theorem for the statistically uncorrelated optimal discriminant feature space: for a pattern recognition problem with L classes, the dimension of the statistically uncorrelated optimal discriminant feature space is (L-1). The relationship between the statistically uncorrelated optimal discriminant vectors and Wilks' classical pattern feature extraction method is clarified: under certain conditions, the statistically uncorrelated optimal discriminant vector set is equivalent to Wilks' classical discriminant vector set. The classical feature extraction method can therefore be used, without losing any Fisher discriminant information, to extract (L-1) statistically uncorrelated optimal discriminant features for an L-class pattern recognition problem.
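A quick numerical illustration of the theorem on synthetic data: for L classes the between-class scatter matrix has rank at most L-1, which is why no more than L-1 statistically uncorrelated optimal discriminant features can be extracted.

```python
import numpy as np

rng = np.random.default_rng(0)
L, per_class, dim = 4, 30, 50
X = np.vstack([rng.normal(loc=3 * c, size=(per_class, dim)) for c in range(L)])
y = np.repeat(np.arange(L), per_class)

# Between-class scatter Sb built from the L class means.
mean_all = X.mean(axis=0)
Sb = np.zeros((dim, dim))
for c in range(L):
    mc = X[y == c].mean(axis=0)
    Sb += per_class * np.outer(mc - mean_all, mc - mean_all)

print("number of classes L:", L)
print("rank(Sb):", np.linalg.matrix_rank(Sb))   # at most L-1 = 3
```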
