Similar Literature
20 similar records found.
1.
Linear discriminant analysis (LDA) often suffers from the small sample size problem when dealing with high-dimensional face data. Random subspace methods can effectively alleviate this problem by randomly sampling face features. However, how to construct an optimal random subspace for discriminant analysis, and how to perform the most efficient discriminant analysis on the constructed subspace, remain open problems. In this paper, we propose a novel framework, random discriminant analysis (RDA), to handle them. We derive the optimal reduced dimension of the face sample for constructing a random subspace in which all the discriminative information in the face space is distributed across the two principal subspaces of the within-class and between-class scatter matrices. We then apply Fisherface and direct LDA, respectively, to the two principal subspaces for simultaneous discriminant analysis. The two sets of discriminant features from the dual principal subspaces are first combined at the feature level, and all the random subspaces are then integrated at the decision level. With discriminative information fused at both levels, our method can take full advantage of the useful discriminant information in the face space. Extensive experiments on different face databases demonstrate its effectiveness.
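A minimal sketch of why random feature sampling sidesteps the small sample size problem that motivates this line of work. The data, dimensions, and two-class labels below are invented toy values, not face images: with more features than samples the within-class scatter matrix is singular, but restricting to a random subset of fewer features than samples restores full rank, so LDA becomes applicable in each subspace.

```python
import numpy as np

# Toy setting: n samples in d dimensions with d > n (the SSS regime).
rng = np.random.default_rng(5)
n, d, k = 12, 50, 8                      # samples, full dim, random subspace dim
X = rng.normal(size=(n, d))
y = np.repeat([0, 1], n // 2)

def within_scatter(X, y):
    """Within-class scatter matrix Sw = sum over classes of centered outer products."""
    Sw = np.zeros((X.shape[1], X.shape[1]))
    for c in np.unique(y):
        Xc = X[y == c] - X[y == c].mean(0)
        Sw += Xc.T @ Xc
    return Sw

rank_full = np.linalg.matrix_rank(within_scatter(X, y))        # at most n - 2 < d
feats = rng.choice(d, size=k, replace=False)                   # one random subspace
rank_sub = np.linalg.matrix_rank(within_scatter(X[:, feats], y))  # full rank k
```

With `k` below the sample count, `Sw` in the subspace is invertible, which is the precondition RDA exploits before running discriminant analysis in each sampled subspace.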

2.
Eigenface (PCA) and Fisherface (LDA) are two of the most commonly used subspace techniques in face recognition. PCA maximizes not only inter-subject variation but also intra-subject variation, without considering class labels even when they are available. LDA is prone to overfitting when the training set is small, a situation that widely occurs in face recognition. In this work, we present a binary feature selection (BFS) method that chooses the most suitable set of eigenfaces for classification when only a small number of training samples per subject are available. The proposed method makes use of class labels, treats each pair of subjects as a group, and picks out the eigenfaces that best distinguish those two subjects to form a binary classifier. The final classifier integrates these binary classifiers by voting. Experiments on the AR and AT&T face databases with small training sets show that the proposed method outperforms not only traditional PCA and LDA but also several state-of-the-art methods.
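The one-vs-one voting scheme described above can be sketched as follows. This is a toy stand-in, not the paper's method: nearest-mean classifiers on invented 2-D Gaussian "subjects" replace the per-pair eigenface selection, but the pairwise-voting structure is the same.

```python
import numpy as np

# Three toy "subjects" with well-separated means; 3 training samples each.
rng = np.random.default_rng(6)
means = np.array([[0., 0.], [4., 0.], [0., 4.]])
train = {c: rng.normal(means[c], 0.5, size=(3, 2)) for c in range(3)}
probe = rng.normal(means[2], 0.5, size=2)            # drawn from subject 2

votes = np.zeros(3, dtype=int)
for a in range(3):
    for b in range(a + 1, 3):                        # each subject pair votes once
        ma, mb = train[a].mean(0), train[b].mean(0)
        winner = a if np.linalg.norm(probe - ma) < np.linalg.norm(probe - mb) else b
        votes[winner] += 1
pred = int(np.argmax(votes))                         # identity with most votes
```

In BFS each pairwise classifier would additionally use only the eigenfaces selected for that pair; here the base classifier is deliberately trivial to keep the voting logic visible.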

3.
The small sample size problem and non-robustness to local variations (such as occlusion, expression, and illumination) are problems commonly faced by linear discriminant analysis (LDA) when processing face images. To address these shortcomings of LDA, a semi-random subspace method based on LDA (SemiRS-LDA) is proposed. Unlike traditional random subspace methods, which sample from the feature set of the whole face image, SemiRS-LDA performs random sampling on sub-images of the face. The method first partitions the face image set into several sub-image sets, then applies the random subspace method to each sub-image set to build multiple LDA classifiers, and finally combines the classifiers by voting. Experiments on two standard face databases (AR and ORL) show that the proposed method not only achieves high recognition performance but is also robust to illumination changes and occlusion.

4.
Linear discriminant analysis (LDA) is one of the most effective feature extraction methods in statistical pattern recognition; it extracts discriminant features by maximizing the so-called Fisher criterion, defined as the ratio of the between-class scatter matrix to the within-class scatter matrix. However, classification of high-dimensional statistical data is usually not amenable to standard pattern recognition techniques because of an underlying small sample size (SSS) problem. A popular approach to the SSS problem is the removal of non-informative features via subspace-based decomposition techniques. Motivated by this viewpoint, many elaborate subspace decomposition methods, including Fisherface, direct LDA (D-LDA), complete PCA plus LDA (C-LDA), random discriminant analysis (RDA) and multilinear discriminant analysis (MDA), have been developed, especially in the context of face recognition. Nevertheless, how to search for a set of complete optimal subspaces for discriminant analysis is still a hot research topic in the area of LDA. In this paper, we propose a novel discriminant criterion, the optimal symmetrical null space (OSNS) criterion, which combines Fisher's maximal discriminant criterion with the minimal one. Using the reformed criterion, complete symmetrical subspaces are constructed from the within-class and between-class scatter matrices, respectively. Unlike traditional subspace learning criteria, which derive only one principal subspace, our approach obtains two null subspaces and their orthogonal complements through the optimization of the OSNS criterion. The algorithm based on OSNS therefore has the potential to outperform traditional LDA algorithms, especially for small sample sizes. Experimental results on the ORL, FERET, XM2VTS and NUST603 face image databases demonstrate the effectiveness of the proposed method.
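The Fisher criterion that all of these methods build on can be shown concretely. The sketch below uses invented two-class Gaussian data (not the OSNS method itself): it forms the within-class and between-class scatter matrices and, for two classes, the direction maximizing the criterion reduces to the classical closed form w ∝ Sw⁻¹(m₁ − m₂).

```python
import numpy as np

# Two toy classes in 2-D; 50 samples each.
rng = np.random.default_rng(0)
X1 = rng.normal([0, 0], 0.5, size=(50, 2))
X2 = rng.normal([3, 1], 0.5, size=(50, 2))
m1, m2 = X1.mean(0), X2.mean(0)
m = np.vstack([X1, X2]).mean(0)

# Within-class scatter Sw and between-class scatter Sb.
Sw = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)
Sb = 50 * np.outer(m1 - m, m1 - m) + 50 * np.outer(m2 - m, m2 - m)

# Fisher's criterion J(w) = (w' Sb w) / (w' Sw w); for two classes the
# maximizer is w ∝ Sw^{-1} (m1 - m2).
w = np.linalg.solve(Sw, m1 - m2)
w /= np.linalg.norm(w)
fisher_ratio = (w @ Sb @ w) / (w @ Sw @ w)
```

The maximized ratio should beat that of any fixed direction, e.g. a coordinate axis, which is what the SSS-era methods try to preserve when Sw becomes singular and this solve is no longer possible directly.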

5.
It is well known that applying independent component analysis (ICA) to high-dimensional pattern recognition tasks such as face recognition often suffers from two problems. One is the small sample size problem; the other is the choice of basis functions (independent components). Both make an ICA classifier unstable and biased. In this paper, we propose an enhanced ICA algorithm based on ensemble learning, called random independent subspace (RIS), to deal with these two problems. First, we use random resampling to generate several low-dimensional feature subspaces and construct one classifier in each feature subspace. These classifiers are then combined into an ensemble classifier using a final decision rule. Extensive experiments on the FERET database suggest that the proposed method improves the performance of the ICA classifier.
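The resample-then-vote structure of RIS can be sketched with a deliberately simple base learner. This is an illustration under assumed toy data, not the paper's pipeline: ICA is omitted and a nearest-class-mean classifier stands in for each subspace classifier, with majority voting as the final decision rule.

```python
import numpy as np

# Two toy Gaussian classes in 20 dimensions.
rng = np.random.default_rng(4)
d, n = 20, 30
X0 = rng.normal(0.0, 1.0, size=(n, d))
X1 = rng.normal(3.0, 1.0, size=(n, d))
test = rng.normal(3.0, 1.0, size=d)           # a probe drawn from class 1

votes = []
for _ in range(11):                           # 11 random feature subspaces
    feats = rng.choice(d, size=8, replace=False)
    m0, m1 = X0[:, feats].mean(0), X1[:, feats].mean(0)
    t = test[feats]
    # Base classifier: nearest class mean within this subspace.
    votes.append(int(np.linalg.norm(t - m1) < np.linalg.norm(t - m0)))
pred = int(np.mean(votes) > 0.5)              # decision-level majority vote
```

Each subspace classifier sees only 8 of the 20 features, so no single one is reliable on its own; the ensemble decision is what stabilizes the prediction.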

6.
Discriminative common vectors for face recognition
In face recognition tasks, the dimension of the sample space is typically larger than the number of samples in the training set. As a consequence, the within-class scatter matrix is singular and the linear discriminant analysis (LDA) method cannot be applied directly. This problem is known as the "small sample size" problem. In this paper, we propose a new face recognition method called the discriminative common vector method, based on a variation of Fisher's linear discriminant analysis for the small sample size case. Two different algorithms are given to extract the discriminative common vectors representing each person in the training set of the face database. One algorithm uses the within-class scatter matrix of the samples in the training set, while the other uses subspace methods and the Gram-Schmidt orthogonalization procedure to obtain the discriminative common vectors. The discriminative common vectors are then used for classification of new faces. The proposed method yields an optimal solution for maximizing the modified Fisher's linear discriminant criterion given in the paper. Our test results show that the discriminative common vector method is superior to other methods in terms of recognition accuracy, efficiency, and numerical stability.
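The key property behind common vectors can be demonstrated in a few lines. This is a toy sketch with invented data (dimension 10, 3 samples per class), not the paper's two algorithms: when the dimension exceeds the number of samples, the within-class scatter has a nontrivial null space, and projecting a class's samples onto that null space collapses them all to a single "common vector" per class.

```python
import numpy as np

# Toy SSS regime: 2 classes, 3 samples each, in 10 dimensions.
rng = np.random.default_rng(1)
d, n_per, classes = 10, 3, 2
X = [rng.normal(c * 2.0, 1.0, size=(n_per, d)) for c in range(classes)]

# Rows of `diffs` span the range of the within-class scatter Sw.
diffs = np.vstack([Xi - Xi.mean(0) for Xi in X])

# Orthonormal basis of null(Sw): the right singular vectors beyond the rank.
_, s, Vt = np.linalg.svd(diffs, full_matrices=True)
rank = int((s > 1e-10).sum())
N = Vt[rank:].T                          # d x (d - rank) null-space basis

# Projecting each class onto null(Sw) kills within-class variation entirely.
common = [Xi @ N for Xi in X]            # each row of common[c] is identical
```

Because every within-class difference lies in the row space of `diffs`, the projection `@ N` annihilates it, while class means generally retain distinct components in the null space, so the common vectors remain separable.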

7.
Linear discriminant analysis is a classic feature extraction method, but it is applicable only in the large-sample case. To address the small sample size and rank-limitation problems faced by traditional LDA, this paper proposes an improved linear discriminant analysis algorithm, ILDA. Based on the matrix exponential, ILDA redefines the within-class and between-class scatter matrices, effectively extracting the information in both the null space and the non-null space of the within-class scatter matrix simultaneously. Comparative experiments on several face databases demonstrate the effectiveness of ILDA for face recognition.

8.
A unified framework for subspace face recognition
PCA, LDA, and Bayesian analysis are the three most representative subspace face recognition approaches. In this paper, we show that they can be unified under the same framework. We first model face difference with three components: intrinsic difference, transformation difference, and noise. A unified framework is then constructed by using this face difference model and a detailed subspace analysis on the three components. We explain the inherent relationship among different subspace methods and their unique contributions to the extraction of discriminating information from the face difference. Based on the framework, a unified subspace analysis method is developed using PCA, Bayes, and LDA as three steps. A 3D parameter space is constructed using the three subspace dimensions as axes. Searching through this parameter space, we achieve better recognition performance than standard subspace methods.

9.
In semi-supervised learning, the random selection of unlabeled samples often degrades classifier performance and causes instability; moreover, traditional semi-supervised learning algorithms perform poorly on high-dimensional data containing only a few labeled samples. To address these problems, this paper explores both the sample space and the feature space and proposes a safe semi-supervised learning algorithm combining random subspace technology and ensemble technology (S3LSE) for classifying high-dimensional data with very few labeled samples. First, S3LSE uses the random subspace technique to decompose the high-dimensional dataset into B feature subsets and optimizes each subset using the implicit information among samples, yielding B optimal feature subsets. Next, each optimal feature subset is sampled to form G sample subsets; in each sample subset a safe sample-labeling method is used to enlarge the labeled set, producing G classifiers that are then combined into an ensemble. The B ensemble classifiers generated from the B optimal feature subsets are then ensembled again to classify the high-dimensional data. Finally, experiments simulating the semi-supervised learning process on high-dimensional datasets show that S3LSE performs well.

10.
Face recognition using Laplacianfaces
We propose an appearance-based face recognition method called the Laplacianface approach. By using locality preserving projections (LPP), the face images are mapped into a face subspace for analysis. Different from principal component analysis (PCA) and linear discriminant analysis (LDA), which effectively see only the Euclidean structure of face space, LPP finds an embedding that preserves local information, and obtains a face subspace that best detects the essential face manifold structure. The Laplacianfaces are the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the face manifold. In this way, the unwanted variations resulting from changes in lighting, facial expression, and pose may be eliminated or reduced. Theoretical analysis shows that PCA, LDA, and LPP can be obtained from different graph models. We compare the proposed Laplacianface approach with Eigenface and Fisherface methods on three different face data sets. Experimental results suggest that the proposed Laplacianface approach provides a better representation and achieves lower error rates in face recognition.
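A minimal LPP sketch under simplifying assumptions (fully connected heat-kernel graph, toy Gaussian data instead of face images): the projection direction a minimizes Σᵢⱼ wᵢⱼ(aᵀxᵢ − aᵀxⱼ)², which leads to the generalized eigenproblem XᵀLX a = λ XᵀDX a with L the graph Laplacian, solved here for the smallest eigenvalue.

```python
import numpy as np

# Toy data: 40 samples in 3 dimensions.
rng = np.random.default_rng(2)
X = rng.normal(size=(40, 3))

# Heat-kernel affinity graph (fully connected here for simplicity).
D2 = ((X[:, None] - X[None]) ** 2).sum(-1)   # pairwise squared distances
W = np.exp(-D2)
np.fill_diagonal(W, 0.0)
Dg = np.diag(W.sum(1))                        # degree matrix
L = Dg - W                                    # graph Laplacian

# Generalized eigenproblem X' L X a = lambda X' Dg X a; smallest eigenvalue
# gives the locality-preserving direction.
A = X.T @ L @ X
B = X.T @ Dg @ X
vals, vecs = np.linalg.eig(np.linalg.solve(B, A))
a = np.real(vecs[:, np.argmin(np.real(vals))])
```

In the Laplacianface setting X would hold (PCA-reduced) face vectors and the graph would use k nearest neighbors; the objective and eigenproblem are the same.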

11.
Linear Discriminant Analysis (LDA) is a widely used technique for pattern classification. It seeks the linear projection of the data to a low-dimensional subspace where the data features can be modelled with maximal discriminative power. The main computation in LDA is the dot product between an LDA base vector and a data point, which involves costly element-wise floating point multiplications. In this paper, we present a fast linear discriminant analysis method called binary LDA (B-LDA), which possesses the desirable property that the subspace projection operation can be computed very efficiently. We investigate the LDA-guided non-orthogonal binary subspace method to find the binary LDA bases, each of which is a linear combination of a small number of Haar-like box functions. We also show that B-LDA base vectors are nearly orthogonal to each other. As a result, in the non-orthogonal vector decomposition process, the computationally intensive pseudo-inverse projection operator can be approximated by the direct dot product without causing significant distance distortion. This direct dot product projection can be computed as a linear combination of the dot products with a small number of Haar-like box functions, which can be efficiently evaluated using the integral image. The proposed approach is applied to face recognition on the ORL and FERET datasets. Experiments show that the discriminative power of binary LDA is preserved and the projection computation is significantly reduced.
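The integral-image trick that makes the box-function dot products cheap can be shown directly. The image and the box coordinates below are arbitrary toy values: after one pass to build the integral image, any rectangular sum (and hence any dot product with a Haar-like box function) costs only four lookups instead of a sum over all pixels in the box.

```python
import numpy as np

# Toy 8x8 "image"; the integral image is zero-padded so indexing is uniform.
rng = np.random.default_rng(3)
img = rng.integers(0, 256, size=(8, 8)).astype(float)
ii = np.pad(img.cumsum(0).cumsum(1), ((1, 0), (1, 0)))

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] via four integral-image lookups."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

fast = box_sum(ii, 2, 1, 6, 5)      # four lookups
direct = img[2:6, 1:5].sum()        # same value, O(box area) additions
```

A B-LDA base vector built from a handful of such boxes therefore needs only a handful of these four-lookup evaluations per projection, which is where the speedup over dense dot products comes from.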

12.
To improve the orthogonality of the discriminant feature subspace in the Fisherface method and raise the recognition rate, a Fisherface algorithm based on matrix symmetry is proposed. The method uses PCA for dimensionality reduction to eliminate the small sample size problem, modifies the traditional Fisher criterion so that the matrix becomes symmetric, and uses the constructed symmetric matrix for classification. Experiments on the standard ORL face database show that the recognition rate of the matrix-symmetry method is clearly higher than that of the traditional method; moreover, the results are stable and little affected by the choice of training set, making the method highly practical.

13.
This paper improves a single-sample face recognition method that enlarges the training set with a sliding window. The improved algorithm uses fewer features than the original in the recognition stage, increasing time efficiency; in the training stage it generates mirror images of the original samples as an additional training and enrollment set, learns two subspaces from them, and obtains the recognition result by decision-level fusion of the two subspaces, improving robustness to variations in the test samples. Experiments on the ORL face database and a FERET subset show that the algorithm outperforms comparable methods in recognition rate.

14.
The problem addressed in this letter concerns multiclassifier generation by the random subspace method (RSM), in which classifiers are constructed in random subspaces of the data feature space. We propose an evolved feature weighting approach: in each subspace, the features are multiplied by a weight factor chosen to minimize the error rate on the training set. An efficient method based on particle swarm optimization (PSO) is proposed for finding a set of weights for each feature in each subspace. The performance improvement over state-of-the-art approaches is validated through experiments with several benchmark data sets.

15.
At present, several applications need to classify high-dimensional points belonging to highly unbalanced classes. Unfortunately, when the training set cardinality is small compared to the data dimensionality (the "small sample size" problem), the classification performance of several well-known classifiers strongly decreases. Similarly, the classification accuracy of several discriminative methods decreases when non-linearly separable, unbalanced classes are treated. In this paper we first survey state-of-the-art methods that employ improved versions of Linear Discriminant Analysis (LDA) to deal with the above-mentioned problems; second, we propose a family of classifiers based on Fisher subspace estimation, which efficiently deal with the small sample size problem, non-linearly separable classes, and unbalanced classes. The promising results obtained by the proposed techniques on benchmark datasets, and the comparison with state-of-the-art predictors, show the efficacy of the proposed techniques.

16.
Linear discriminant analysis (LDA) and biased discriminant analysis (BDA) are two effective techniques for dimension reduction, which pay attention to different roles of the positive and negative samples in finding discriminating subspace. However, the drawbacks of these two methods are obvious: LDA has limited efficiency in classifying sample data from subclasses with different distributions, and BDA does not account for the underlying distribution of negative samples. In order to effectively exploit favorable attributes of both BDA and LDA and avoid their unfavorable ones, we propose a novel adaptive discriminant analysis (ADA) for image classification. ADA can find an optimal discriminative subspace with adaptation to different sample distributions. In addition, three novel variants and extensions of ADA are further proposed: 1) integrated boosting (i.Boosting), which enhances and combines a set of ADA classifiers into a more powerful one. i.Boosting integrates feature re-weighting, relevance feedback, and AdaBoost into one framework. With affordable computational cost, i.Boosting can provide a unified and stable solution to ADA prediction result. 2) Fast adaptive discriminant analysis (FADA). Instead of searching parameters, FADA can directly find a close-to-optimal projection very fast based on different sample distributions. 3) Two-dimensional adaptive discriminant analysis (2DADA). As opposed to ADA, 2DADA is based on 2-D image matrix representation rather than 1-D vector. So it is simpler, more straightforward, and has lower time complexity to use for image feature extraction. Extensive experiments on synthetic data, UCI benchmark data sets, hand-digit data set, four facial image data sets, and COREL color image data sets show the superior performance of our proposed approaches.

17.
In this paper, a robust face recognition algorithm is proposed, based on elastic graph matching (EGM) and a discriminative feature analysis algorithm. We introduce a cost function for EGM that takes account of variations in face pose and facial expression, and propose its optimization procedure. Our proposed cost function uses a set of Gabor-wavelet-based features, called robust jets, which are robust against these variations. The robust jet is defined in terms of discrete Fourier transform coefficients of Gabor coefficients. To cope with the difference between the face poses of the test face and the reference faces, a 2 × 2 warping matrix is incorporated in the proposed cost function. For the discriminative feature analysis, linear projection discriminant analysis (PDA) and kernel-based projection discriminant analysis are introduced. These methods are motivated by the small sample size problem of training samples. The basic idea of PDA is that a class is represented by a subspace spanned by some training samples of the class instead of by the sample mean vector; that the distance from a pattern to a class is defined using the error vector between the pattern and its projection onto the subspace representing the class; and that an optimum feature selection rule is developed using this distance concept, in a way similar to conventional linear discriminant analysis. In order to evaluate the performance of our face recognition algorithm, we carried out experiments using the well-known FERET face database and compared the performance with recently developed approaches. We observed that our algorithm outperformed the compared approaches.

18.
This paper addresses the dimension reduction problem in Fisherface for face recognition. When the number of training samples is less than the image dimension (total number of pixels), the within-class scatter matrix (Sw) in Linear Discriminant Analysis (LDA) is singular, and Principal Component Analysis (PCA) is employed in Fisherface for dimension reduction of Sw so that it becomes nonsingular. The popular method is to select the largest nonzero eigenvalues and the corresponding eigenvectors for LDA. To attenuate the illumination effect, some researchers suggested removing the three eigenvectors with the largest eigenvalues, and the performance is improved. However, as far as we know, there is no systematic way to determine which eigenvalues should be used. Along this line, this paper proposes a theorem to interpret why PCA can be used in LDA, and an automatic and systematic method to select the eigenvectors to be used in LDA using a Genetic Algorithm (GA). A GA-PCA is then developed. It is found that some small eigenvectors should also be used as part of the basis for dimension reduction. Using GA-PCA to reduce the dimension, a GA-Fisher method is designed and developed. Compared with the traditional Fisherface method, the proposed GA-Fisher offers two additional advantages. First, optimal bases for dimensionality reduction are derived from GA-PCA. Second, the computational efficiency of LDA is improved by adding a whitening procedure after dimension reduction. The Face Recognition Technology (FERET) and Carnegie Mellon University Pose, Illumination, and Expression (CMU PIE) databases are used for evaluation. Experimental results show that an improvement of almost 5% over Fisherface can be obtained, and the results are encouraging.
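The whitening-after-PCA step mentioned above can be sketched as follows. The data are invented correlated Gaussians, and the eigenvectors are chosen simply by largest eigenvalue rather than by the paper's GA selection: after projecting onto k eigenvectors and rescaling each component by the inverse square root of its eigenvalue, the reduced features have identity covariance.

```python
import numpy as np

# Toy correlated data: 100 samples in 6 dimensions.
rng = np.random.default_rng(7)
X = rng.normal(size=(100, 6)) @ rng.normal(size=(6, 6))
Xc = X - X.mean(0)
cov = Xc.T @ Xc / (len(X) - 1)

# PCA: keep the k eigenvectors with largest eigenvalues (GA selection omitted).
vals, vecs = np.linalg.eigh(cov)
k = 4
idx = np.argsort(vals)[::-1][:k]
P = vecs[:, idx]

# Whitening: scale each principal component to unit variance.
W = P / np.sqrt(vals[idx])
Z = Xc @ W                         # reduced, whitened features
```

After whitening, the subsequent LDA step works on decorrelated unit-variance features, which is the computational simplification the paper exploits.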

19.
Current face recognition algorithms often ignore the influence of noise during training; in particular, when both the training data and the test data are contaminated by noise, recognition performance drops markedly. For face recognition under substantial noise such as illumination changes, disguise, occlusion, and expression variation, a sparse-representation face recognition algorithm based on low-rank subspace projection and Gabor features is proposed. The algorithm first obtains the latent low-rank structure and the sparse error structure of the training samples via low-rank matrix recovery; it then uses principal component analysis to find the transformation matrix of the low-rank subspace containing the Gabor features of the low-rank structure; the Gabor feature vectors of all samples are projected into this low-rank subspace via the transformation matrix, where a sparse representation classification algorithm performs the final recognition. Experiments on the Extended Yale B and AR databases show that the new algorithm achieves a high recognition rate and strong robustness to interference.

20.
Part-based cascaded linear discriminant analysis for face recognition
This paper proposes a cascaded linear discriminant analysis method for face recognition based on a part-based face representation. The method divides the face image into multiple overlapping parts, applies linear discriminant analysis to each part to find that part's discriminant directions, and then applies linear discriminant analysis to all parts to find the overall optimal discriminant directions. The features extracted by this cascaded linear discriminant analysis serve as the face description. Face identification and verification experiments on the FERET face database show that the method outperforms the traditional Fisherface method based on the whole image.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号