Similar Documents
 20 similar documents found.
1.
A Face Recognition Algorithm Combining the Null Space Method and F-LDA   Cited by 2 (0 self-citations, 2 by others)
王增锋  王汇源  冷严 《计算机应用》2005,25(11):2586-2588
Linear discriminant analysis (LDA) is a widely used linear feature extraction method. Traditional LDA suffers from two main problems when applied to face recognition: 1) the small sample size problem, where insufficient training samples make the scatter matrices singular; and 2) an optimization criterion that does not correlate directly with the recognition rate. A new LDA-based face recognition algorithm is proposed that solves both problems simultaneously. First, a new null space method is derived by redefining the within-class and between-class scatter matrices. This null space method is then combined with F-LDA (fractional LDA) to obtain a feature extraction method that is more effective for face recognition. Experimental results show that the new algorithm achieves a high recognition rate.
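For orientation, a minimal Python sketch of the null-space idea this entry builds on (the paper's redefined scatter matrices and its F-LDA stage are not reproduced); it assumes `X` is an (n_samples, n_features) array with fewer samples than features, so the within-class scatter has a non-trivial null space:

```python
import numpy as np
from scipy.linalg import null_space

def null_space_lda(X, y, n_components):
    """Null-space LDA sketch: discriminant directions are sought inside
    the null space of the within-class scatter S_w, where within-class
    scatter vanishes and between-class scatter can be maximized."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean_all)[:, None]
        Sb += len(Xc) * diff @ diff.T
    N = null_space(Sw)                 # basis of null(S_w); non-empty when n < d
    Sb_null = N.T @ Sb @ N             # between-class scatter inside null(S_w)
    evals, evecs = np.linalg.eigh(Sb_null)
    order = np.argsort(evals)[::-1][:n_components]
    return N @ evecs[:, order]         # columns = projection directions
```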

2.
Normalized LDA Face Recognition Based on Ensemble Learning   Cited by 1 (1 self-citation, 0 by others)
To address the small sample size problem frequently encountered in face recognition, the normalized LDA algorithm is improved and combined with ensemble learning: using AdaBoost, a weighting function is introduced at each iteration to increase the weights of samples that are hard to separate. This increases the diversity among the base classifiers and thereby improves the separability of samples in the new feature space, raising the recognition rate to 98.5%. Extensive experiments on the ORL database show that the algorithm outperforms the traditional algorithm.
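A rough sketch of the ensemble scheme, with two stated substitutions: sklearn's shrinkage-regularized LDA stands in for the paper's normalized LDA, and the per-round weighting of hard samples is realized by weighted resampling; all hyperparameters are hypothetical:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def adaboost_lda(X, y, n_rounds=20, rng=np.random.default_rng(0)):
    """AdaBoost-style loop: samples that the current LDA misclassifies
    get larger weights, so later base learners focus on them."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    learners, alphas = [], []
    for _ in range(n_rounds):
        idx = rng.choice(n, size=n, p=w)      # weighted resampling
        clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
        clf.fit(X[idx], y[idx])
        miss = clf.predict(X) != y
        err = np.clip(w[miss].sum(), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)  # learner's voting weight
        w *= np.exp(alpha * np.where(miss, 1.0, -1.0))
        w /= w.sum()                           # re-normalize sample weights
        learners.append(clf)
        alphas.append(alpha)
    return learners, alphas
```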

3.
刘敬 《计算机科学》2011,38(12):274-277
To reduce the dimensionality of hyperspectral remote sensing imagery, a ground-cover classification method, the direct LDA subspace method, is proposed: direct linear discriminant analysis (direct LDA) is first used for feature extraction, and a minimum-distance classifier is then applied in the resulting feature subspace. Classification results on hyperspectral imagery from the airborne visible/infrared imaging spectrometer (AVIRIS) show that, compared with the LDA subspace method and classification in the original space, the method significantly reduces the data dimensionality while improving the recognition rate.
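A minimal sketch of direct LDA in its usual two-stage form (diagonalize the between-class scatter first, then keep the directions where the projected within-class scatter is smallest); whether this matches the paper's exact variant is an assumption:

```python
import numpy as np

def direct_lda(X, y, n_components):
    """Direct LDA sketch: the null space of S_b is discarded first, so the
    (informative) null space of S_w inside range(S_b) is never lost."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean_all)[:, None]
        Sb += len(Xc) * diff @ diff.T
    evals, V = np.linalg.eigh(Sb)
    keep = evals > 1e-10
    Y = V[:, keep] / np.sqrt(evals[keep])      # whitens S_b: Y.T @ Sb @ Y = I
    wvals, U = np.linalg.eigh(Y.T @ Sw @ Y)
    order = np.argsort(wvals)[:n_components]   # smallest within-class scatter
    return Y @ U[:, order]                     # projection matrix
```

The minimum-distance classification step can then be, for instance, sklearn's NearestCentroid applied to the projected features.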

4.
Feature Extraction for Hyperspectral Data Based on Automatic Subspace Partitioning   Cited by 7 (0 self-citations, 7 by others)
To deal with the large data volume and high dimensionality of hyperspectral remote sensing imagery, an automatic subspace partitioning method is proposed for reducing hyperspectral image data. The method consists of three steps: partitioning of the data space, principal component analysis within each subspace, and feature selection based on a class-separability criterion. It exploits the local correlation between neighboring bands of a hyperspectral image, partitioning the full band set into several independent subspaces of strongly correlated bands; principal component analysis is then applied within each subspace for feature extraction, and effective features are selected according to the separability of the ground-cover classes. Ground-cover classification is finally used to verify the effectiveness of the method. Experimental results show that the method effectively reduces the dimensionality of hyperspectral image data and extracts features; compared with the existing adaptive subspace decomposition method and the segmented principal component transform method, the features it extracts yield better classification accuracy. With this method, when the dimensionality of the hyperspectral data is reduced by 90%, the overall accuracy of a nine-class ground-cover classification experiment reaches 80.2%.
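A sketch of the first two steps under simplifying assumptions: a new subspace starts wherever the correlation between adjacent bands drops below a hypothetical threshold `corr_thresh`, and the class-separability-based feature selection step is omitted:

```python
import numpy as np
from sklearn.decomposition import PCA

def partition_bands(X, corr_thresh=0.9):
    """Split the band axis into groups of strongly correlated adjacent
    bands; X has shape (n_pixels, n_bands)."""
    corr = np.corrcoef(X.T)
    groups, start = [], 0
    for b in range(1, X.shape[1]):
        if corr[b - 1, b] < corr_thresh:
            groups.append(range(start, b))
            start = b
    groups.append(range(start, X.shape[1]))
    return groups

def subspace_pca_features(X, groups, n_per_group=2):
    """PCA within each band subspace, then concatenate the components."""
    feats = [PCA(n_components=min(n_per_group, len(g))).fit_transform(X[:, g])
             for g in groups]
    return np.hstack(feats)
```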

5.
Dimensionality reduction is the process of mapping high-dimensional patterns to a lower-dimensional subspace. When done prior to classification, estimates obtained in the lower-dimensional subspace are more reliable. For some classifiers, there is also an improvement in performance due to the removal of the diluting effect of redundant information. A majority of the present approaches to dimensionality reduction are based on scatter matrices or other statistics of the data that do not directly correlate with classification accuracy. The optimality criterion of choice for classification purposes is the Bayes error. Usually, however, the Bayes error is difficult to express analytically. We propose an optimality criterion based on an approximation of the Bayes error and use it to formulate a linear and a nonlinear method of dimensionality reduction. The nonlinear method relies on a multilayered perceptron that produces the lower-dimensional representation as its output. It thus differs from the autoassociative multilayered perceptrons that have been proposed and used for dimensionality reduction. Our results show that the nonlinear method is, as anticipated, superior to the linear method in that it can unfold a nonlinear manifold. In addition, the nonlinear method provides a substantially better lower-dimensional representation (for classification purposes) than Fisher's linear discriminant (FLD) and two other nonlinear dimensionality reduction methods that are often used.
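A minimal sketch of the structure of the nonlinear method, assuming hypothetical arrays `X_train`, `y_train`: an MLP with a narrow hidden layer is trained for classification, and that hidden layer, rather than an autoencoder-style reconstruction bottleneck, serves as the low-dimensional representation:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Train for classification, so the 2-unit hidden layer is shaped by
# (a surrogate of) classification error, not by reconstruction error.
mlp = MLPClassifier(hidden_layer_sizes=(2,), activation="tanh",
                    max_iter=2000).fit(X_train, y_train)

def bottleneck(mlp, X):
    """Forward pass through the first layer only: the learned 2-D,
    classification-driven representation."""
    return np.tanh(X @ mlp.coefs_[0] + mlp.intercepts_[0])

Z = bottleneck(mlp, X_train)   # low-dimensional patterns
```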

6.
The random subspace method (RSM) is one of the ensemble learning algorithms widely used in pattern classification applications. RSM has the advantages of a small error rate and improved noise insensitivity due to the ensemble construction of the base learners. However, randomness may reduce the final ensemble decision performance because of contributions from classifiers trained on subsets with low class separability. In this study, we present a new and improved version of the RSM by introducing a weighting factor into the combination phase. One of the class separability criteria, J3, is used as a weighting factor to improve the classification performance and eliminate the drawbacks of the standard RSM algorithm. The randomly selected subsets are quantified by computing their J3 measure to determine voting weights in the model combination phase, assigning lower voting weights to classifiers trained on subsets with poor class separability. Two models are presented: J3-weighted RSM and optimized J3-weighted RSM. In J3-weighted RSM, the computed weighting values are directly multiplied by the class assignment posteriors, whereas in optimized J3-weighted RSM, the computed weighting values are optimized by a pattern search algorithm before being multiplied by the posteriors. Both models are shown to provide better error rates at lower subset dimensionality.
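A sketch of the J3-weighted variant only (the optimized variant's pattern search is omitted); decision trees are an assumed stand-in for the base learners, and the hyperparameters are hypothetical:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def j3_measure(X, y):
    """Class separability J3 = trace(Sw^-1 Sm): larger values mean the
    feature subset separates the classes better."""
    Sm = np.cov(X.T) * (len(X) - 1)            # mixture scatter
    Sw = np.zeros_like(Sm)
    for c in np.unique(y):
        Xc = X[y == c] - X[y == c].mean(axis=0)
        Sw += Xc.T @ Xc
    return np.trace(np.linalg.pinv(Sw) @ Sm)

def j3_weighted_rsm(X, y, n_learners=15, sub_dim=5,
                    rng=np.random.default_rng(0)):
    """Each random subset gets a voting weight equal to its J3 measure."""
    models = []
    for _ in range(n_learners):
        feats = rng.choice(X.shape[1], size=sub_dim, replace=False)
        clf = DecisionTreeClassifier().fit(X[:, feats], y)
        models.append((feats, clf, j3_measure(X[:, feats], y)))
    return models

def rsm_predict(models, X):
    """Soft voting: class posteriors scaled by each subset's J3 weight."""
    total = sum(w * clf.predict_proba(X[:, feats])
                for feats, clf, w in models)
    return models[0][1].classes_[total.argmax(axis=1)]
```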

7.
Linear discriminant analysis (LDA) is a dimension reduction method which finds an optimal linear transformation that maximizes the class separability. However, in undersampled problems where the number of data samples is smaller than the dimension of data space, it is difficult to apply LDA due to the singularity of scatter matrices caused by high dimensionality. In order to make LDA applicable, several generalizations of LDA have been proposed recently. In this paper, we present theoretical and algorithmic relationships among several generalized LDA algorithms and compare their computational complexities and performances in text classification and face recognition. Towards a practical dimension reduction method for high dimensional data, an efficient algorithm is proposed, which reduces the computational complexity greatly while achieving competitive prediction accuracies. We also present nonlinear extensions of these LDA algorithms based on kernel methods. It is shown that a generalized eigenvalue problem can be formulated in the kernel-based feature space, and generalized LDA algorithms are applied to solve the generalized eigenvalue problem, resulting in nonlinear discriminant analysis. Performances of these linear and nonlinear discriminant analysis algorithms are compared extensively.
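For reference, the simplest member of this family of generalizations, regularized (shrinkage) LDA, which sidesteps the singular within-class scatter by adding a ridge term, is available directly in sklearn; `X_train` and `y_train` are placeholders:

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Shrinkage blends the empirical within-class covariance with a scaled
# identity, so the generalized eigenproblem is well posed even when
# n_samples < n_features.
rlda = LinearDiscriminantAnalysis(solver="eigen", shrinkage=0.1)
Z = rlda.fit(X_train, y_train).transform(X_train)   # at most c-1 dimensions
```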

8.
The usual dimensionality reduction technique in supervised learning is based mainly on linear discriminant analysis (LDA), but it suffers from singularity or undersampled problems. On the other hand, a regular support vector machine (SVM) separates the data only along one single direction of maximum margin, and the classification accuracy may not be good enough. In this letter, a recursive SVM (RSVM) is presented, in which several orthogonal directions that best separate the data with the maximum margin are obtained. Theoretical analysis shows that a completely orthogonal basis can be derived in the feature subspace spanned by the training samples and that the margin decreases along the recursive components in linearly separable cases. As a result, a new dimensionality reduction technique based on multilevel maximum margin components, and in turn a classifier with high accuracy, are achieved. Experiments on synthetic and several real data sets show that RSVM using multilevel maximum margin features can perform efficient dimensionality reduction and outperforms regular SVM in binary classification problems.
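A compact sketch of the recursion for the binary linear case: fit a linear SVM, record its maximum-margin direction, deflate the data along that direction, and repeat; for L2-regularized hinge loss the next solution stays in the span of the deflated data, hence orthogonal to the stored directions:

```python
import numpy as np
from sklearn.svm import LinearSVC

def recursive_svm_directions(X, y, n_components):
    """RSVM sketch (binary y): returns rows of orthogonal maximum-margin
    directions, usable as a dimensionality-reducing projection."""
    Xr = X.astype(float).copy()
    dirs = []
    for _ in range(n_components):
        w = LinearSVC(C=1.0).fit(Xr, y).coef_.ravel()
        w /= np.linalg.norm(w)
        dirs.append(w)
        Xr -= np.outer(Xr @ w, w)   # deflate: remove the found direction
    return np.array(dirs)
```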

9.
The data were analyzed with three models: reduced-rank linear discriminant analysis (RRLDA), reduced-rank quadratic discriminant analysis (RRQDA), and principal component analysis followed by linear discriminant analysis (PCA+LDA), and the models were tested on the vowel test dataset. Misclassification-rate curves were plotted for all three models, together with the optimal decision boundaries of RRLDA and PCA+LDA after reduction to two dimensions. The experimental results show that RRLDA outperforms PCA+LDA.
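For reference, the PCA+LDA baseline in this comparison is a standard pipeline; a sketch with hypothetical data splits (loading of the vowel data is assumed):

```python
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# PCA first removes directions that would make the within-class scatter
# singular; LDA then projects to 2 discriminant dimensions for plotting.
pca_lda = make_pipeline(PCA(n_components=0.95),
                        LinearDiscriminantAnalysis(n_components=2))
pca_lda.fit(X_train, y_train)
print(1 - pca_lda.score(X_test, y_test))   # misclassification rate
```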

10.
陈斌  张连海  牛铜  屈丹  李弼程 《自动化学报》2014,40(6):1208-1215
A linear discriminant analysis (LDA) method based on the minimum classification error (MCE) criterion is proposed and applied to feature transformation in continuous speech recognition. The method estimates the data probability distributions with nonparametric kernel density estimation; given the estimated distributions, the discriminant transformation matrix is solved for under the MCE criterion using a gradient-descent-based line search algorithm. The transformation matrix is then used to reduce the dimensionality of supervectors formed by concatenating the Mel filter bank outputs of adjacent frames, yielding time-frequency features. Experimental results show that, compared with traditional MFCC features, the time-frequency features extracted by this discriminant analysis improve recognition accuracy by 1.41%; compared with HLDA (heteroscedastic LDA) and the approximate pairwise empirical accuracy criterion (aPEAC) discriminant analysis methods, recognition accuracy improves by 1.14% and 0.83%, respectively.

11.
Feature reduction methods for big data with high-dimensional attributes are studied. Feature selection and subspace learning are the two common approaches to feature reduction: feature selection is highly interpretable, while subspace learning classifies better than feature selection; yet the two are usually applied independently of each other. A new feature selection method is therefore proposed that combines the two: two subspace learning techniques, linear discriminant analysis (LDA) and locality preserving projections (LPP), are used so that both the global and the local structure of the data are taken into account, while a sparse regularization term carries out the feature selection. Experimental comparisons on classification accuracy, variance, and coefficient of variation show that the algorithm selects discriminative features more effectively than the competing algorithms and achieves good classification performance.

12.
Geometric Mean for Subspace Selection   Cited by 2 (0 self-citations, 2 by others)
Subspace selection approaches are powerful tools in pattern classification and data visualization. One of the most important subspace approaches is the linear dimensionality reduction step in Fisher's linear discriminant analysis (FLDA), which has been successfully employed in many fields such as biometrics, bioinformatics, and multimedia information management. However, the linear dimensionality reduction step in FLDA has a critical drawback: for a classification task with c classes, if the dimension of the projected subspace is strictly lower than c - 1, the projection tends to merge classes that are close together in the original feature space. If the classes are sampled from Gaussian distributions, all with identical covariance matrices, then the linear dimensionality reduction step in FLDA maximizes the arithmetic mean of the Kullback-Leibler (KL) divergences between the different classes. Based on this viewpoint, the geometric mean for subspace selection is studied in this paper. Three criteria are analyzed: 1) maximization of the geometric mean of the KL divergences, 2) maximization of the geometric mean of the normalized KL divergences, and 3) the combination of 1) and 2). Preliminary experimental results on synthetic data, the UCI Machine Learning Repository, and handwritten digits show that the third criterion is a promising discriminative subspace selection method that significantly reduces the class separation problem compared with the linear dimensionality reduction step in FLDA and several of its representative extensions.
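A sketch of criterion 1), the geometric mean of the pairwise KL divergences, under the homoscedastic Gaussian setting the abstract describes; `W` is a candidate projection with at least two columns, and the normalized variant and criterion 3) are not reproduced:

```python
import numpy as np

def kl_gauss_shared(m1, m2, S):
    """KL divergence between two Gaussians with means m1, m2 and a shared
    covariance S: 0.5 * (m1 - m2)^T S^-1 (m1 - m2)."""
    d = m1 - m2
    return 0.5 * d @ np.linalg.solve(S, d)

def log_geometric_mean_kl(X, y, W):
    """Log geometric mean of pairwise KL divergences after projecting by
    W. Unlike the arithmetic mean, any near-zero pair drags the geometric
    mean down, so close class pairs are not sacrificed."""
    classes = np.unique(y)
    Z = X @ W
    S = sum(np.cov(Z[y == c].T) for c in classes) / len(classes)
    means = {c: Z[y == c].mean(axis=0) for c in classes}
    logs = [np.log(kl_gauss_shared(means[a], means[b], S))
            for i, a in enumerate(classes) for b in classes[i + 1:]]
    return float(np.mean(logs))
```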

13.
Learning a compact and yet discriminative codebook is an important procedure for local feature-based action recognition. A common procedure involves two independent phases: reducing the dimensionality of local features and then performing clustering. Since the two phases are disconnected, dimensionality reduction does not necessarily capture the dimensions that are most helpful for codebook creation. Moreover, some dimensionality reduction techniques, such as principal component analysis, do not take class separability into account and thus may not help build an effective codebook. In this paper, we propose the weighted adaptive metric learning (WAML) method, which integrates the two independent phases into a unified optimization framework. This framework makes it possible to select the indispensable and crucial dimensions for building a discriminative codebook. The dimensionality reduction phase in the WAML is optimized for class separability and adaptively adjusts the distance metric to improve the separability of the data. In addition, video word weighting is smoothly incorporated into the WAML to accurately generate video words. Experimental results demonstrate that our approach builds a highly discriminative codebook and achieves results comparable to other state-of-the-art approaches.

14.
刘敬  赵峰  刘逸 《计算机应用》2012,32(4):1025-1029
The subspace produced by traditional linear discriminant analysis (LDA) tends to preserve the separability of class pairs with large between-class distances while discarding that of class pairs with small between-class distances. Based on the idea that a subspace should preserve the separability of all class pairs in a balanced way, a new criterion, the proportion of divergence (PD) criterion, is proposed: the mean, over all class pairs, of the ratio of each pair's scatter in the subspace to its scatter in the original space. The solution procedure for the LDA that maximizes the PD criterion (PD-LDA) is derived. PD-LDA is applied to feature extraction from the amplitude spectra of high-resolution range profiles (HRRP). A minimum Euclidean distance classifier and a support vector machine (SVM) classifier were trained on measured field data; the recognition results of both classifiers show that, compared with LDA, PD-LDA markedly reduces the data dimensionality while effectively improving the recognition rate.

15.
A Hybrid LDA and SVM Method for Multi-class Classification   Cited by 2 (0 self-citations, 2 by others)
To address the problems that the decision directed acyclic graph support vector machine (DDAGSVM) must train a large number of SVMs and accumulates errors, a multi-class classification algorithm that mixes linear discriminant analysis (LDA) with SVM is proposed. First, an optimized LDA classification threshold is derived from how high-dimensional samples project into a low-dimensional space. Then the classification error of the optimized LDA on each binary problem is taken as the degree of linear separability of that class pair; pairs with low linear separability are handled by a nonlinear SVM instead, with its classification error taken as the pair's separability. Finally, the separability values serve as the decision basis of the hybrid DDAG classifier. Experiments show that, compared with DDAGSVM, the proposed algorithm trains and classifies faster while preserving generalization accuracy.
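A sketch of the pair-level hybrid choice only; the DDAG decision structure and the paper's optimized LDA threshold are not reproduced, and `sep_thresh` is a hypothetical parameter:

```python
from itertools import combinations
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

def fit_pairwise_hybrid(X, y, classes, sep_thresh=0.05):
    """For each class pair, keep the cheap LDA when the pair is nearly
    linearly separable; otherwise fall back to a kernel SVM."""
    models = {}
    for a, b in combinations(classes, 2):
        mask = (y == a) | (y == b)
        Xp, yp = X[mask], y[mask]
        lda = LinearDiscriminantAnalysis().fit(Xp, yp)
        err = 1 - lda.score(Xp, yp)   # training error as separability proxy
        models[(a, b)] = lda if err < sep_thresh else SVC(kernel="rbf").fit(Xp, yp)
    return models
```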

16.
We present a modular linear discriminant analysis (LDA) approach for face recognition. A set of observers is trained independently on different regions of frontal faces, and each observer projects face images to a lower-dimensional subspace. These lower-dimensional subspaces are computed using LDA methods, including a new algorithm that we refer to as direct, weighted LDA, or DW-LDA. DW-LDA combines the advantages of two recent LDA enhancements, namely direct LDA (D-LDA) and weighted pairwise Fisher criteria. Each observer performs recognition independently, and the results are combined using a simple sum rule. Experiments compare the proposed approach to other face recognition methods that employ linear dimensionality reduction. These experiments demonstrate that the modular LDA method performs significantly better than other linear subspace methods. The results also show that D-LDA does not necessarily perform better than the well-known principal component analysis followed by LDA. This is a significant counterpoint to previously published experiments that used smaller databases. Our experiments also indicate that the new DW-LDA algorithm is an improvement over D-LDA.
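A sketch of the modular scheme with two stated simplifications: the regions are plain horizontal bands rather than chosen face regions, and ordinary LDA stands in for DW-LDA; the sum rule over posteriors follows the abstract:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fit_observers(X_img, y, n_bands=4):
    """X_img: (n_samples, height, width). One LDA observer per band."""
    bands = np.array_split(np.arange(X_img.shape[1]), n_bands)
    observers = [LinearDiscriminantAnalysis().fit(
                     X_img[:, rows, :].reshape(len(y), -1), y)
                 for rows in bands]
    return observers, bands

def predict_sum_rule(observers, bands, X_img):
    """Combine the observers' posteriors with a simple sum rule."""
    post = sum(obs.predict_proba(X_img[:, rows, :].reshape(len(X_img), -1))
               for obs, rows in zip(observers, bands))
    return observers[0].classes_[post.argmax(axis=1)]
```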

17.
Incremental Linear Discriminant Analysis for Face Recognition   Cited by 3 (0 self-citations, 3 by others)
Dimensionality reduction methods have been successfully employed for face recognition. Among the various dimensionality reduction algorithms, linear (Fisher) discriminant analysis (LDA) is one of the popular supervised dimensionality reduction methods, and many LDA-based face recognition algorithms/systems have been reported in the last decade. However, LDA-based face recognition systems suffer from a scalability problem. To overcome this limitation, an incremental approach is a natural solution. The main difficulty in developing incremental LDA (ILDA) is handling the inverse of the within-class scatter matrix. In this paper, based on the generalized singular value decomposition LDA (LDA/GSVD), we develop a new ILDA algorithm called GSVD-ILDA. Unlike existing techniques, in which the new projection matrix is found in a restricted subspace, the proposed GSVD-ILDA determines the projection matrix in the full space. Extensive experiments are performed to compare the proposed GSVD-ILDA with LDA/GSVD as well as with existing ILDA methods, using the Face Recognition Technology (FERET) database and the Carnegie Mellon University Pose, Illumination, and Expression (PIE) face database. Experimental results show that the proposed GSVD-ILDA algorithm gives the same performance as LDA/GSVD with much lower computational complexity. The experimental results also show that the proposed GSVD-ILDA gives better classification performance than the other recently proposed ILDA algorithms.

18.
To improve the efficiency and accuracy of big data mining, sparse representation and feature weighting are applied in the data processing pipeline. First, features are classified by finding sparse solutions of a system of linear equations; by means of vector norms, this sparse solution step is recast as the minimization of an objective function. After the feature classification, feature extraction reduces the data dimensionality; finally, the data are weighted effectively according to their within-class distribution to complete the mining. Experimental results show that, compared with common feature extraction and feature weighting algorithms, the proposed algorithm shows clear advantages in both recall and precision.
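A sketch of the sparse-representation step: the "sparsest solution of a linear system" is relaxed, in the standard way, to an l1-regularized least-squares problem, solved here with sklearn's Lasso; the dictionary `D` and `alpha` are hypothetical:

```python
from sklearn.linear_model import Lasso

def sparse_code(D, x, alpha=0.01):
    """Approximately solve min_c ||x - D c||^2 + alpha * ||c||_1 for the
    sparse coefficients of sample x over dictionary D, whose columns are
    training samples (so D has shape (dim, n_atoms))."""
    return Lasso(alpha=alpha, fit_intercept=False,
                 max_iter=10000).fit(D, x).coef_
```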

19.
The locality-regularized linear regression classifier (LLRC) achieves high recognition rates and high efficiency in face recognition, but the original feature space does not guarantee its performance. To improve LLRC, a new dimensionality reduction method tied to it is proposed: locality-regularized linear regression classification based discriminant analysis (LLRC-DA). LLRC-DA designs its objective function according to the decision rule of LLRC and seeks the optimal feature subspace by maximizing the between-class local reconstruction error while minimizing the within-class local reconstruction error. In addition, LLRC-DA removes redundant information by imposing an orthogonality constraint on the projection matrix. To solve for the projection matrix efficiently, a new trace-ratio optimization algorithm that exploits the relationships among the optimization variables is proposed, which makes LLRC-DA particularly well suited to LLRC. Experiments on the FERET and ORL face databases show that LLRC-DA outperforms existing methods.
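For context, a sketch of the underlying LLRC decision rule that the LLRC-DA objective is built around (LLRC-DA itself, with its trace-ratio solver, is not reproduced):

```python
import numpy as np

def llrc_predict(x, X_train, y_train, k=5):
    """Each class reconstructs the query from its k nearest training
    samples by least squares; the class with the smallest local
    reconstruction error wins."""
    errs = {}
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        near = np.argsort(np.linalg.norm(Xc - x, axis=1))[:k]
        A = Xc[near].T                          # (dim, k) local basis
        coef, *_ = np.linalg.lstsq(A, x, rcond=None)
        errs[c] = np.linalg.norm(x - A @ coef)  # local reconstruction error
    return min(errs, key=errs.get)
```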

20.
Most manifold learning algorithms adopt the k nearest neighbors function to construct the adjacency graph. However, severe bias may be introduced in this case if the samples are not uniformly distributed in the ambient space. In this paper a semi-supervised dimensionality reduction method is proposed to alleviate this problem. Based on the notion of local margin, we simultaneously maximize the separability between different classes and estimate the intrinsic geometric structure of the data by both the labeled and unlabeled samples. For high-dimensional data, a discriminant subspace is derived via maximizing the cumulative local margins. Experimental results on high-dimensional classification tasks demonstrate the efficacy of our algorithm.
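A sketch of one plain reading of the local-margin criterion, using labeled samples only; the paper's k-nearest-neighbor graph and its use of unlabeled samples are not reproduced, and each class is assumed to have at least two labeled samples:

```python
import numpy as np

def cumulative_local_margin(X, y, W):
    """Sum over samples of (distance to the nearest different-class
    sample minus distance to the nearest same-class sample) after
    projecting with W; maximizing this over W favors subspaces where
    each sample's local neighborhood is label-pure."""
    Z = X @ W
    total = 0.0
    for i in range(len(Z)):
        d = np.linalg.norm(Z - Z[i], axis=1)
        d[i] = np.inf                 # exclude the sample itself
        total += d[y != y[i]].min() - d[y == y[i]].min()
    return total
```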
