Similar Documents
10 similar documents found (search time: 62 ms)
1.
A New Algorithm for Generalized Optimal Discriminant Vectors (total citations: 6; self-citations: 1; citations by others: 6)
This paper studies algorithms for solving the generalized optimal set of discriminant vectors and proposes, for the first time, an analytical algorithm for computing it. Because all the generalized optimal sets of discriminant vectors are obtained simultaneously and no iterative operations are needed, the proposed algorithm saves a large amount of computation time. It also yields a much higher recognition rate and overcomes the shortcoming of conventional face recognition algorithms, which were effective for small sample size problems only. These claims are supported by numerical simulation experiments on the ORL facial database.
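As background for the discriminant vectors discussed above, the classical (non-generalized) Fisher discriminant vectors can be obtained from the generalized eigenproblem Sb·w = λ·Sw·w. The sketch below is a minimal numpy baseline, not the paper's analytical algorithm for the generalized optimal set:

```python
import numpy as np

def fisher_discriminant_vectors(X, y, n_components=1):
    """Classical Fisher discriminant vectors via the generalized
    eigenproblem Sb w = lambda Sw w (a standard baseline; the paper's
    analytical algorithm for the generalized optimal set differs)."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))   # within-class scatter
    Sb = np.zeros((d, d))   # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean_all)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    # Solve Sw^{-1} Sb w = lambda w; pinv guards against a singular Sw.
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-eigvals.real)
    return eigvecs[:, order[:n_components]].real

# Two well-separated Gaussian classes, split along the first axis.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0, 0], 0.1, (50, 2)),
               rng.normal([3, 0], 0.1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
W = fisher_discriminant_vectors(X, y)
```

The leading discriminant direction here aligns with the axis along which the classes are separated.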

2.
This paper proposes a new optimal set of discriminant vectors, the statistically uncorrelated generalized optimal discriminant vector set, and gives formulas for computing it. Face recognition experiments on the ORL face database show that the method has strong feature extraction ability.

3.
The Foley-Sammon (F-S) optimal discriminant vectors are the vectors that maximize the Fisher criterion function under orthogonality constraints. Based on the Fisher criterion function, this paper proposes an unconstrained optimal set of discriminant vectors and gives an algorithm for computing it. In addition, when the number of training sample vectors is smaller than their dimension (the small sample size problem), the within-class scatter matrix is singular; to make it nonsingular, the samples are projected to a lower dimension, and a formula is given for the minimum dimension that guarantees nonsingularity. Experimental results show that the discriminant vector set has good classification ability.
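The dimensionality-reduction remedy for the singular within-class scatter matrix can be sketched as follows. This is a minimal sketch using a PCA pre-projection to dimension n − C (n samples, C classes), which suffices generically; the paper's exact formula for the minimum dimension is not reproduced here:

```python
import numpy as np

# Small sample size setting: n training vectors in d dimensions, n < d,
# so the within-class scatter Sw is singular in the original space.
rng = np.random.default_rng(1)
n, d, C = 20, 100, 2
X = rng.normal(size=(n, d))
y = np.array([0] * 10 + [1] * 10)

Xc = X - X.mean(axis=0)
# PCA via SVD of the centered data matrix.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = n - C                      # target dimension (generically sufficient)
P = Vt[:k].T                   # d x k projection matrix
Z = Xc @ P                     # projected samples

# The within-class scatter in the reduced space is now full rank.
Sw = np.zeros((k, k))
for c in (0, 1):
    Zc = Z[y == c]
    m = Zc.mean(axis=0)
    Sw += (Zc - m).T @ (Zc - m)
```

After the projection, Sw is nonsingular and the discriminant vectors can be computed in the reduced space.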

4.
The algorithm and the theorem of uncorrelated optimal discriminant vectors (UODV) were proposed by Jin. In this paper, we present new improvements to Jin's method, which include an improved approach for the original algorithm and a generalized theorem for UODV. Experimental results prove that our approach is superior to the original in the recognition rate.

5.
This paper develops a new image feature extraction and recognition method coined two-dimensional linear discriminant analysis (2DLDA). 2DLDA provides a sequentially optimal image compression mechanism, compacting the discriminant information into the upper-left corner of the image. 2DLDA also suggests a feature selection strategy to select the most discriminative features from that corner. 2DLDA is tested and evaluated using the AT&T face database. The experimental results show 2DLDA is more effective and computationally more efficient than the current LDA algorithms for face feature extraction and recognition.
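The core idea of working on image matrices directly, rather than vectorizing them, can be sketched as a one-sided 2D-LDA in numpy. This is a simplified sketch: the paper's 2DLDA additionally applies the transform sequentially on both sides to compact the discriminant information into the upper-left corner, which is omitted here:

```python
import numpy as np

def one_sided_2dlda(images, labels, n_components=2):
    """One-sided 2D-LDA sketch: scatter matrices are built directly
    from image matrices (no vectorization) and images are projected
    column-wise onto the leading discriminant directions."""
    labels = np.asarray(labels)
    classes = np.unique(labels)
    M = images.mean(axis=0)                      # global mean image
    h, w = M.shape
    Sw = np.zeros((w, w))
    Sb = np.zeros((w, w))
    for c in classes:
        Ac = images[labels == c]
        Mc = Ac.mean(axis=0)                     # class mean image
        for A in Ac:
            Sw += (A - Mc).T @ (A - Mc)
        Sb += len(Ac) * (Mc - M).T @ (Mc - M)
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-eigvals.real)
    W = eigvecs[:, order[:n_components]].real    # w x k projection
    return np.stack([A @ W for A in images]), W  # h x k feature matrices

# Toy "images": two classes of 8x6 matrices with different means.
rng = np.random.default_rng(2)
imgs = np.vstack([rng.normal(0.0, 1.0, (5, 8, 6)),
                  rng.normal(3.0, 1.0, (5, 8, 6))])
labels = [0] * 5 + [1] * 5
feats, W = one_sided_2dlda(imgs, labels)
```

Note that each image is reduced from 8×6 to an 8×2 feature matrix, without ever forming a 48-dimensional vector.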

6.
Since the generalized optimal discriminant vector set is an extension of the Foley-Sammon optimal discriminant vector set, this paper gives a definition of generalized optimal discriminant vectors, analyzes the existing algorithms for computing the generalized optimal discriminant vector set, points out their shortcomings, and presents an improved algorithm. Applying the method to face recognition, the results show that the new method is more effective than the existing ones.

7.
Feature extraction is among the most important problems in face recognition systems. In this paper, we propose an enhanced kernel discriminant analysis (KDA) algorithm called kernel fractional-step discriminant analysis (KFDA) for nonlinear feature extraction and dimensionality reduction. Not only can this new algorithm, like other kernel methods, deal with nonlinearity required for many face recognition tasks, it can also outperform traditional KDA algorithms in resisting the adverse effects due to outlier classes. Moreover, to further strengthen the overall performance of KDA algorithms for face recognition, we propose two new kernel functions: cosine fractional-power polynomial kernel and non-normal Gaussian RBF kernel. We perform extensive comparative studies based on the YaleB and FERET face databases. Experimental results show that our KFDA algorithm outperforms traditional kernel principal component analysis (KPCA) and KDA algorithms. Moreover, further improvement can be obtained when the two new kernel functions are used.
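The general shape of a cosine-normalized fractional-power polynomial kernel can be sketched as below. The exact functional forms used in the paper are not given in this abstract, so the kernel definition here is an assumption based on the common fractional-power polynomial form; the function names are illustrative only:

```python
import numpy as np

def frac_poly_kernel(x, y, p=0.8, c=1.0):
    """Fractional-power polynomial kernel (x.y + c)^p with 0 < p < 1.
    Assumed general form; the paper's exact kernel may differ."""
    s = np.dot(x, y) + c
    return np.sign(s) * np.abs(s) ** p

def cosine_kernel(k, x, y):
    """Cosine normalization of an arbitrary kernel k, the construction
    typically used to build 'cosine' variants of polynomial kernels."""
    return k(x, y) / np.sqrt(k(x, x) * k(y, y))

a = np.array([1.0, 0.5])
b = np.array([0.9, 0.6])
kab = cosine_kernel(frac_poly_kernel, a, b)
```

Cosine normalization bounds the kernel response by its self-similarity, so k(x, x) normalizes to exactly 1.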

8.
This paper proposes a new discriminant analysis with orthonormal coordinate axes of the feature space. In general, the number of coordinate axes of the feature space in the traditional discriminant analysis depends on the number of pattern classes. Therefore, the discriminatory capability of the feature space is limited considerably. The new discriminant analysis solves this problem completely. In addition, it is more powerful than the traditional one in so far as the discriminatory power and the mean error probability for coordinate axes are concerned. This is also shown by a numerical example.
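The idea of extracting orthonormal discriminant axes beyond the usual C − 1 limit can be sketched in the Foley-Sammon style: each new axis maximizes the Fisher criterion subject to orthonormality with all previous axes. This is a minimal deflation-based sketch, not the paper's specific construction:

```python
import numpy as np

def orthonormal_discriminant_vectors(X, y, n_axes):
    """Foley-Sammon-style sketch: each axis maximizes the Fisher
    criterion in the orthogonal complement of the axes found so far,
    so the resulting axes are orthonormal by construction."""
    d = X.shape[1]

    def scatters(Z):
        m = Z.mean(axis=0)
        Sw = np.zeros((Z.shape[1],) * 2)
        Sb = np.zeros((Z.shape[1],) * 2)
        for c in np.unique(y):
            Zc = Z[y == c]
            mc = Zc.mean(axis=0)
            Sw += (Zc - mc).T @ (Zc - mc)
            Sb += len(Zc) * np.outer(mc - m, mc - m)
        return Sw, Sb

    W = np.zeros((d, 0))
    for _ in range(n_axes):
        # Orthonormal basis of the complement of the axes found so far.
        U, s, _ = np.linalg.svd(np.eye(d) - W @ W.T)
        B = U[:, s > 0.5]
        Sw, Sb = scatters(X @ B)
        vals, vecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
        w = B @ vecs[:, np.argmax(vals.real)].real
        W = np.hstack([W, (w / np.linalg.norm(w))[:, None]])
    return W

# Two classes in 3-D: classical LDA yields at most C - 1 = 1 axis,
# but this construction extracts 2 orthonormal axes.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal([0, 0, 0], 0.2, (20, 3)),
               rng.normal([2, 0, 0], 0.2, (20, 3))])
y = np.array([0] * 20 + [1] * 20)
W = orthonormal_discriminant_vectors(X, y, n_axes=2)
```

This illustrates the abstract's point: the number of usable axes is no longer capped by the number of pattern classes.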

9.
This work proposes a method to decompose the kernel within-class eigenspace into two subspaces: a reliable subspace spanned mainly by the facial variation and an unreliable subspace due to the limited number of training samples. A weighting function is proposed to circumvent undue scaling of eigenvectors corresponding to the unreliable small and zero eigenvalues. Eigenfeatures are then extracted by the discriminant evaluation in the whole kernel space. These efforts facilitate a discriminative and stable low-dimensional feature representation of the face image. Experimental results on the FERET, ORL and GT databases show that our approach consistently outperforms other kernel based face recognition methods.

10.
An improved face recognition approach based on discriminative common vectors and a support vector machine is proposed in this paper. The discriminative common vectors (DCV) algorithm is a recently introduced discriminant method, which shows better face recognition performance than some commonly used linear discriminant algorithms. DCV is based on a variation of Fisher's Linear Discriminant Analysis for the small sample size case. However, for the multiclass problem, the Fisher criterion is clearly suboptimal. We design an improved discriminative common vector by adjusting the Fisher criterion so that the within-class and between-class scatter matrices are estimated more accurately for classification purposes. We then employ a support vector machine as the classifier, owing to its high classification accuracy and generalization ability. Tested on two large public face databases, ORL and AR, the experimental results demonstrate that the proposed method is an effective face recognition approach that outperforms several representative recognition methods.
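The standard DCV-plus-SVM pipeline described above can be sketched as follows: project samples onto the null space of the within-class scatter, where every sample of a class collapses to a single common vector, then classify the features with an SVM. This is a minimal sketch of standard DCV with scikit-learn's SVC; the paper's Fisher-criterion adjustment is not included:

```python
import numpy as np
from sklearn.svm import SVC

# Small sample size regime (d > n - C) so that null(Sw) is nontrivial.
rng = np.random.default_rng(4)
n_per, d = 5, 30
X = np.vstack([rng.normal(0.0, 1.0, (n_per, d)),
               rng.normal(1.0, 1.0, (n_per, d))])
y = np.array([0] * n_per + [1] * n_per)

# Within-class difference vectors span the range of Sw; the remaining
# right-singular directions form an orthonormal basis of null(Sw).
diffs = np.vstack([X[y == c] - X[y == c].mean(axis=0) for c in (0, 1)])
_, s, Vt = np.linalg.svd(diffs, full_matrices=True)
rank = int(np.sum(s > 1e-10))
N = Vt[rank:].T                 # d x (d - rank) null-space basis

Z = X @ N                       # DCV features: classes collapse to points
clf = SVC(kernel="linear").fit(Z, y)
acc = clf.score(Z, y)
```

In the null space of Sw the within-class variation vanishes exactly, so each class becomes a single common vector and the SVM separates the training set perfectly.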


Copyright©北京勤云科技发展有限公司  京ICP备09084417号