1.
Face Recognition Using Standardized LDA    Total citations: 13 (self-citations: 0, by others: 13)
Linear discriminant analysis (LDA) is a widely used linear classification method for feature extraction. This paper proposes an LDA-based face recognition method, standardized LDA, which overcomes the drawbacks of traditional LDA. It redefines the between-class scatter matrix, adding to the original definition a variable weight function determined by the between-class distances, so that the selected projection directions separate the samples of the different classes more effectively. At the same time, it resolves the matrix-singularity problem in a reasonable and effective way: the null space of the within-class scatter matrix is retained, because this space contains the most discriminative information. Within this null space, the eigenvectors corresponding to the larger eigenvalues of the between-class scatter matrix are chosen as the final dimensionality-reducing transformation matrix. Experimental results show that in face recognition the method achieves a better recognition rate than traditional LDA. Standardized LDA can also be applied to other image recognition problems.
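The null-space idea recurring in this abstract (and several below) is easy to sketch generically: retain the null space of the within-class scatter S_w, then maximize between-class scatter inside it. The following is a minimal numpy illustration of that generic scheme, not the authors' standardized-LDA code; in particular, their distance-based weight function on S_b is omitted, and the function name and tolerance are assumptions.

```python
import numpy as np

def null_space_lda(X, y, n_components=1, tol=1e-10):
    """Generic null-space LDA sketch.
    X: (n_samples, n_features); y: integer class labels.
    Assumes a small-sample setting, so null(S_w) is non-trivial."""
    classes = np.unique(y)
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)          # within-class scatter
        diff = (mc - mean).reshape(-1, 1)
        Sb += len(Xc) * diff @ diff.T          # between-class scatter
    # Basis of null(S_w): eigenvectors with (near-)zero eigenvalues.
    w, V = np.linalg.eigh(Sw)
    N = V[:, w < tol]
    # Inside that null space, keep the directions with the largest
    # between-class scatter.
    w2, V2 = np.linalg.eigh(N.T @ Sb @ N)
    order = np.argsort(w2)[::-1][:n_components]
    return N @ V2[:, order]                    # final projection matrix

# Tiny small-sample example: 3 samples in 4-D, so S_w is singular.
X = np.array([[1.0, 0.0, 0.0, 0.0],
              [1.1, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])
y = np.array([0, 0, 1])
W = null_space_lda(X, y, n_components=1)
```

Projecting `X` onto `W` separates the two classes even though S_w cannot be inverted here.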
2.
A Face Recognition Algorithm Combining the Null-Space Method and F-LDA    Total citations: 2 (self-citations: 0, by others: 2)
Linear discriminant analysis (LDA) is a commonly used linear feature extraction method. When applied to face recognition, traditional LDA suffers from two main problems: 1) the small sample size problem, i.e., matrix singularity caused by insufficient training samples; and 2) an optimization criterion that is not directly related to the recognition rate. This paper proposes a new LDA-based face recognition algorithm that solves both problems simultaneously. First, by redefining the within-class and between-class scatter matrices, a new null-space method is derived. This method is then combined with the F-LDA (fractional LDA) algorithm to obtain a feature extraction method that is more effective for face recognition. Experimental results show that the new algorithm achieves a high recognition rate.
3.
MLDA: An Improved Linear Discriminant Analysis Algorithm    Total citations: 1 (self-citations: 0, by others: 1)
Linear discriminant analysis (LDA) is a pattern recognition method that has been widely applied in pattern recognition, data analysis, and many other fields. LDA seeks directions that are effective for classification, but it breaks down when the sample dimensionality far exceeds the number of samples (the small sample size problem). To solve this problem effectively, an improved LDA algorithm, MLDA, is proposed. The algorithm scalarizes the within-class scatter matrix, thereby avoiding its inversion altogether. Experiments show that MLDA alleviates the small sample size problem of classical LDA to a considerable extent.
4.
Linear discriminant analysis (LDA) is a widely used linear classification method for feature extraction, but applying LDA directly to face recognition runs into the small sample size problem and the rank limitation problem. To address both, this paper proposes MLDA, an LDA algorithm based on a combination of multi-order matrices. The algorithm redefines the within-class scatter matrix Sw of traditional LDA, making the traditional Fisher criterion more robust and adaptable. Comparative experiments on several face databases demonstrate the effectiveness of MLDA.
5.
Face Recognition Based on an Improved LDA Algorithm    Total citations: 1 (self-citations: 0, by others: 1)
This paper proposes a face recognition algorithm based on an improved LDA. The algorithm overcomes the drawbacks of traditional LDA by redefining the between-class scatter matrix and the Fisher criterion, thereby retaining the most discriminative information and improving recognition accuracy. Experimental results show that the algorithm is feasible and achieves a higher recognition rate than the traditional PCA+LDA approach.
6.
7.
Wavelet decomposition is applied to extract the low-frequency subband image, and an optimized linear discriminant analysis (LDA) algorithm is used to find the optimal projection subspace, onto which facial features are mapped for classification and recognition. The method removes the traditional LDA requirement that the within-class scatter matrix be nonsingular and resolves the overlap between boundary classes, giving it a wider range of applications. Experiments show that the method outperforms both traditional LDA and principal component analysis (PCA).
8.
9.
Linear discriminant analysis (LDA) is a commonly used feature extraction method whose goal is to maximize the ratio of between-class scatter to within-class scatter of the extracted features, i.e., to give the classes the best possible separability in the feature space. LDA projects the samples of all classes into a single feature space using one shared criterion, ignoring differences in the distributions of the individual classes. This paper proposes class-specific LDA (CSLDA), which finds an optimal projection matrix for each class so that, after projection, that class is separated as well as possible from the samples of all other classes. Combining this method with the empirical kernel yields class-specific linear discriminant analysis in the empirical kernel space. Experimental results on artificial and UCI data sets show that, in both the input space and the empirical kernel space, features extracted by CSLDA yield higher recognition rates than those extracted by LDA.
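The per-class idea can be illustrated with a minimal one-vs-rest Fisher sketch: each class gets its own direction separating it from the pooled remaining classes. This is only a generic illustration under that assumption; CSLDA's exact per-class criterion may differ, and the function name and ridge term `reg` are inventions for the example.

```python
import numpy as np

def class_specific_directions(X, y, reg=1e-6):
    """One Fisher direction per class: class c vs. all other samples."""
    dirs = {}
    d = X.shape[1]
    for c in np.unique(y):
        Xc, Xr = X[y == c], X[y != c]
        Sw = np.zeros((d, d))
        for Z in (Xc, Xr):                     # pooled within-group scatter
            Zc = Z - Z.mean(axis=0)
            Sw += Zc.T @ Zc
        # Classic two-class Fisher solution w ∝ Sw^{-1}(m_c - m_rest),
        # with a small ridge so a singular Sw does not break the solve.
        w = np.linalg.solve(Sw + reg * np.eye(d), Xc.mean(0) - Xr.mean(0))
        dirs[c] = w / np.linalg.norm(w)
    return dirs

# Toy data: three well-separated classes in 2-D.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.1, size=(20, 2))
               for m in ([0, 0], [3, 0], [0, 3])])
y = np.repeat([0, 1, 2], 20)
dirs = class_specific_directions(X, y)
```

Each class's projection pushes that class away from the rest, which is the behavior the abstract attributes to class-dependent projections.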
10.
Linear discriminant analysis (LDA) is a classical method in pattern recognition, but it struggles with the small sample size problem. To address it, this paper proposes hyperbolic cosine matrix discriminant analysis (HCDA). The method first defines the hyperbolic cosine matrix function and its eigensystem, then exploits the properties of that eigensystem by introducing it into the Fisher criterion for feature extraction. HCDA has two advantages: a) it avoids the small sample size problem and can extract more discriminative information; b) it implicitly performs a nonlinear mapping that enlarges the distances between samples, and enlarges distances between samples of different classes more than distances between samples of the same class, which favors classification. Experimental results on a handwritten digit database, a handwritten letter image database, and the Georgia Tech face database show that HCDA achieves better recognition performance than representative methods for solving the small sample size problem of LDA.
11.
Cancer classification is one of the major applications of microarray technology. When standard machine learning techniques are applied to cancer classification, they face the small sample size (SSS) problem of gene expression data: the feature space is of large dimensionality (due to the large number of genes) compared to the small number of available samples. To overcome the SSS problem, the dimensionality of the feature space is reduced either through feature selection or through feature extraction. Linear discriminant analysis (LDA) is a well-known technique for feature extraction-based dimensionality reduction, but it cannot be applied directly to cancer classification because the SSS problem makes the within-class scatter matrix singular. In this paper, we use the gradient LDA technique, which avoids this singularity, and show its usefulness for cancer classification. The technique is applied to three gene expression datasets: acute leukemia, small round blue-cell tumour (SRBCT), and lung adenocarcinoma. It achieves a lower misclassification error than several previous techniques.
12.
Xiaoning Song, Jingyu Yang, Xiaojun Wu, Xibei Yang. Soft Computing - A Fusion of Foundations, Methodologies and Applications, 2011, 15(2): 281-293
Linear discriminant analysis (LDA) is one of the most effective feature extraction methods in statistical pattern recognition; it extracts discriminant features by maximizing the so-called Fisher criterion, defined as the ratio of the between-class scatter matrix to the within-class scatter matrix. However, classification of high-dimensional statistical data is usually not amenable to standard pattern recognition techniques because of an underlying small sample size (SSS) problem. A popular approach to the SSS problem is the removal of non-informative features via subspace-based decomposition techniques. Motivated by this viewpoint, many elaborate subspace decomposition methods, including Fisherface, direct LDA (D-LDA), complete PCA plus LDA (C-LDA), random discriminant analysis (RDA), and multilinear discriminant analysis (MDA), have been developed, especially in the context of face recognition. Nevertheless, how to find a complete set of optimal subspaces for discriminant analysis is still a hot research topic in the area of LDA. In this paper, we propose a novel discriminant criterion, called the optimal symmetrical null space (OSNS) criterion, that can be used to compute Fisher's maximal discriminant criterion combined with the minimal one. Meanwhile, with the reformed criterion, complete symmetrical subspaces based on the within-class and between-class scatter matrices are constructed, respectively. Unlike traditional subspace learning criteria, which derive only one principal subspace, in our approach two null subspaces and their orthogonal complements are all obtained through the optimization of the OSNS criterion. Therefore, the algorithm based on OSNS has the potential to outperform traditional LDA algorithms, especially in small sample size cases. Experimental results on the ORL, FERET, XM2VTS, and NUST603 face image databases demonstrate the effectiveness of the proposed method.
13.
Maximum Scatter Difference Discriminant Analysis and Face Recognition    Total citations: 16 (self-citations: 3, by others: 13)
Traditional Fisher linear discriminant analysis (LDA) inevitably encounters the small sample size problem in high-dimensional image recognition applications such as face recognition. This paper proposes a discriminant analysis method based on a scatter-difference criterion. Unlike LDA, the method uses the difference between the between-class scatter and the within-class scatter of the sample patterns, rather than their ratio, as the discriminant criterion, thereby avoiding at the root the difficulty caused by the singularity of the within-class scatter matrix. Experimental results on the ORL and AR face databases verify the effectiveness of the algorithm.
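A scatter-difference criterion of this kind reduces to an ordinary symmetric eigenproblem, so no inversion of S_w is ever needed. The sketch below illustrates the generic idea (maximize w^T(S_b - S_w)w); the paper may weight the two terms differently, and the function names here are assumptions.

```python
import numpy as np

def scatter_matrices(X, y):
    """Standard within-class and between-class scatter matrices."""
    d = X.shape[1]
    mean = X.mean(axis=0)
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean).reshape(-1, 1)
        Sb += len(Xc) * diff @ diff.T
    return Sb, Sw

def scatter_difference_projection(X, y, n_components=1):
    # S_b - S_w is symmetric, so its top eigenvectors maximize
    # w^T (S_b - S_w) w; singularity of S_w is irrelevant here.
    Sb, Sw = scatter_matrices(X, y)
    w, V = np.linalg.eigh(Sb - Sw)
    return V[:, np.argsort(w)[::-1][:n_components]]

# Small-sample toy case: 4 samples in 5-D, S_w is singular.
X = np.array([[0.0, 0, 0, 0, 0],
              [0.1, 0, 0, 0, 0],
              [2.0, 0, 0, 0, 0],
              [2.1, 0, 0, 0, 0]])
y = np.array([0, 0, 1, 1])
W = scatter_difference_projection(X, y)
```

Even with a singular S_w, the projection cleanly separates the two classes, which is exactly the advantage the abstract claims over the ratio criterion.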
14.
Face Recognition Based on a Symmetric Linear Discriminant Analysis Algorithm    Total citations: 1 (self-citations: 0, by others: 1)
Because of the small sample size problem, the within-class scatter matrix is singular, so solving the generalized eigenequation of linear discriminant analysis (LDA) is ill-posed. To address this, building on existing algorithms, mirror images are introduced to enlarge the sample set, and a null-space method is used to obtain the optimal solution of the Fisher criterion function. Experimental results on the ORL and Yale standard face databases show that the recognition performance is better than that of traditional LDA, independent component analysis (ICA), and two-dimensional symmetric principal component analysis (2DSPCA).
15.
Linear discriminant analysis (LDA) is one of the most popular supervised dimensionality reduction (DR) techniques; it obtains discriminant projections by maximizing the ratio of average-case between-class scatter to average-case within-class scatter. Two recent discriminant analysis algorithms, minimal distance maximization (MDM) and worst-case LDA (WLDA), obtain projections by optimizing worst-case scatters. In this paper, we develop a new LDA framework called LDA with worst-case between-class separation and average within-class compactness (WSAC), which maximizes the ratio of worst-case between-class scatter to average-case within-class scatter. This can be achieved by relaxing the trace ratio optimization to a distance metric learning problem. Comparative experiments demonstrate its effectiveness. In addition, discriminant analysis counterparts using the local geometry of the data and the kernel trick can likewise be embedded into our framework and solved in the same way.
16.
The purpose of conventional linear discriminant analysis (LDA) is to find an orientation that projects high-dimensional feature vectors of different classes into a more manageable low-dimensional space in the most discriminative way for classification. The LDA technique uses an eigenvalue decomposition (EVD) method to find such an orientation, and this computation is usually adversely affected by the small sample size problem. In this paper we present a new direct LDA method (called gradient LDA) for computing the orientation, aimed especially at the small sample size problem; a gradient descent-based method is used for this purpose. It also avoids discarding the null spaces of the within-class and between-class scatter matrices, which may hold discriminative information useful for classification.
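The gradient-based idea behind such methods can be sketched without reproducing the paper's exact update rule: ascend the (log of the) Fisher ratio J(w) = (w^T S_b w)/(w^T S_w w) by gradient steps on the unit sphere, so S_w is never inverted and no null space is discarded. The function name, step size, and toy matrices below are assumptions for illustration only.

```python
import numpy as np

def gradient_fisher_direction(Sb, Sw, lr=0.01, steps=3000, seed=0):
    """Maximize log(w^T Sb w) - log(w^T Sw w) by projected gradient ascent.
    Assumes w^T Sw w > 0 along the trajectory (true for the toy data below)."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(Sb.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(steps):
        a = w @ Sb @ w
        b = w @ Sw @ w
        # Gradient of log a - log b; it is tangential to the sphere
        # because the Fisher ratio is scale-invariant in w.
        grad = 2 * (Sb @ w / a - Sw @ w / b)
        w = w + lr * grad
        w /= np.linalg.norm(w)     # stay on the unit sphere
    return w

# Toy 2-D problem where the best direction is clearly the first axis:
# Sb favors axis 1, Sw penalizes axis 2.
Sb = np.diag([1.0, 0.1])
Sw = np.diag([0.1, 1.0])
w = gradient_fisher_direction(Sb, Sw)
```

On this toy problem the iterate converges to the first coordinate axis, the top generalized eigenvector, without ever forming an inverse of Sw.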
17.
Feature extraction is an important preprocessing step of the classification procedure, particularly for high-dimensional data with a limited number of training samples. Conventional supervised feature extraction methods, such as linear discriminant analysis (LDA), generalized discriminant analysis, and non-parametric weighted feature extraction, need to calculate scatter matrices: the within-class and between-class scatter matrices are used to formulate the criterion of class separability. Because of the limited number of training samples, these matrices cannot be estimated accurately, so the classification accuracy of such methods falls off in small sample size situations. To cope with this problem, a new supervised feature extraction method, feature extraction using attraction points (FEUAP), has recently been proposed in which no statistical moments are used, so it works well with limited training samples. To take advantage of both this method and LDA, this article combines them in a dyadic scheme: similar classes are grouped hierarchically by the k-means algorithm so that a tree with several nodes is constructed, and the class of each pixel is determined from this scheme. Depending on the node of the tree, FEUAP is used when the number of training samples is limited and LDA when it is large. The experimental results demonstrate the better performance of the proposed hybrid method in comparison with other supervised feature extraction methods in a small sample size situation.
18.
Hyun-Chul Kim, Daijin Kim, Sung Yang Bang. Pattern Recognition, 2003, 36(5): 1095-1105
Linear discriminant analysis (LDA) is a data discrimination technique that seeks a transformation maximizing the ratio of the between-class scatter to the within-class scatter. While it has been successfully applied to several applications, it has two limitations, both concerning underfitting. First, it fails to discriminate data with complex distributions, since all data in each class are assumed to be distributed in a Gaussian manner. Second, it can lose class-wise information, since it produces only one transformation over the entire range of classes. We propose three extensions of LDA to overcome these problems. The first extension addresses the first problem by modelling the within-class scatter using a PCA mixture model that can represent more complex distributions. The second extension addresses the second problem by applying a different transformation to each class in order to provide class-wise features. The third extension combines these two modifications by representing each class in terms of the PCA mixture model and applying a different transformation to each mixture component. All of the proposed extensions are shown to outperform LDA in terms of classification error on synthetic data classification, handwritten digit recognition, and alphabet recognition.
19.
Li Zhang, Wei Da Zhou, Pei-Chann Chang. Neurocomputing, 2011, 74(4): 568-574
This paper develops a generalized nonlinear discriminant analysis (GNDA) method and deals with its small sample size (SSS) problems. GNDA is a nonlinear extension of linear discriminant analysis (LDA), while kernel Fisher discriminant analysis (KFDA) can be regarded as a special case of GNDA. In LDA, an undersampled or small sample size problem occurs when the sample size is less than the sample dimensionality, which results in a singular within-class scatter matrix. Because of the high-dimensional nonlinear mapping in GNDA, small sample size problems arise rather frequently. To tackle this issue, this research presents five different schemes for GNDA to solve the SSS problems. Experimental results on real-world data sets show that these schemes are very effective in tackling small sample size problems.