Similar documents
20 similar documents found (search time: 15 ms)
1.
A reformative kernel Fisher discriminant method is proposed. It is derived directly from naive kernel Fisher discriminant analysis and offers superior classification efficiency: only a subset of the training patterns, called "significant nodes", needs to be used when classifying a test pattern. A recursive algorithm for selecting the "significant nodes", which is the key of the method, is presented in detail. Experiments on benchmarks show that the method is both effective and highly efficient in classification.

2.
A nonlinear classification method, Fisher discriminant analysis based on nonlinear mapping (NM-FDA), is proposed. First, basis vectors are extracted. Then the Nyström method is applied, with the basis vectors as training samples, to approximate a nonlinear mapping of unknown form by one of known form; this approximate nonlinear mapping transforms variables from the nonlinear input space into a linear feature subspace. Finally, linear Fisher discriminant analysis is performed on the mapped data. Experiments on seven standard data sets show that NM-FDA has strong classification ability.

3.
Kernel Fisher discriminant analysis (KFDA) extracts a nonlinear feature from a sample by calculating as many kernel functions as there are training samples. Thus, its computational efficiency is inversely proportional to the size of the training sample set. In this paper we propose a more efficient approach to nonlinear feature extraction, FKFDA (fast KFDA). FKFDA consists of two parts. First, we select a portion of the training samples based on two criteria produced by approximating kernel principal component analysis (AKPCA) in the kernel feature space. Then, referring to the selected training samples as nodes, we formulate FKFDA to improve the efficiency of nonlinear feature extraction. In FKFDA, the discriminant vectors are expressed as linear combinations of nodes in the kernel feature space, and extracting a feature from a sample requires calculating only as many kernel functions as there are nodes. Therefore, the proposed FKFDA has a much faster feature extraction procedure than naive kernel-based methods. Experimental results on face recognition and benchmark classification datasets suggest that FKFDA generates well-separated features.

4.
An improved manifold learning method, called enhanced semi-supervised local Fisher discriminant analysis (ESELF), is proposed for face recognition. Motivated by the fact that being statistically uncorrelated and parameter-free are two desirable and promising characteristics for dimensionality reduction, a new difference-based optimization objective function involving unlabeled samples is designed. The proposed method preserves the manifold structure of labeled and unlabeled samples while separating labeled samples of different classes from each other. The semi-supervised method has an analytic form of the globally optimal solution, which can be computed by eigendecomposition. Experiments on synthetic data and the AT&T, Yale and CMU PIE face databases are performed to test and evaluate the proposed algorithm. The experimental results and comparisons demonstrate the effectiveness of the proposed method.

5.
Feature extraction is among the most important problems in face recognition systems. In this paper, we propose an enhanced kernel discriminant analysis (KDA) algorithm called kernel fractional-step discriminant analysis (KFDA) for nonlinear feature extraction and dimensionality reduction. Not only can this new algorithm, like other kernel methods, deal with the nonlinearity required for many face recognition tasks, it can also outperform traditional KDA algorithms in resisting the adverse effects of outlier classes. Moreover, to further strengthen the overall performance of KDA algorithms for face recognition, we propose two new kernel functions: the cosine fractional-power polynomial kernel and the non-normal Gaussian RBF kernel. We perform extensive comparative studies based on the Yale B and FERET face databases. Experimental results show that our KFDA algorithm outperforms traditional kernel principal component analysis (KPCA) and KDA algorithms, and that further improvement can be obtained when the two new kernel functions are used.

6.
This work proposes a method to decompose the kernel within-class eigenspace into two subspaces: a reliable subspace spanned mainly by the facial variation and an unreliable subspace due to limited number of training samples. A weighting function is proposed to circumvent undue scaling of eigenvectors corresponding to the unreliable small and zero eigenvalues. Eigenfeatures are then extracted by the discriminant evaluation in the whole kernel space. These efforts facilitate a discriminative and stable low-dimensional feature representation of the face image. Experimental results on FERET, ORL and GT databases show that our approach consistently outperforms other kernel based face recognition methods.

7.
Variable selection serves a dual purpose in statistical classification problems: it enables one to identify the input variables which separate the groups well, and a classification rule based on these variables frequently has a lower error rate than the rule based on all the input variables. Kernel Fisher discriminant analysis (KFDA) is a recently proposed powerful classification procedure, frequently applied in cases characterised by large numbers of input variables. The important problem of eliminating redundant input variables before implementing KFDA is addressed in this paper. A backward elimination approach is recommended, and two criteria which can be used for recursive elimination of input variables are proposed and investigated. Their performance is evaluated on several data sets and in a simulation study.
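As a rough illustration of the backward elimination idea described in this abstract, the sketch below uses scikit-learn's recursive feature elimination with a linear SVM as the ranking criterion; the dataset, estimator, and target count are illustrative assumptions, not the KFDA-based criteria proposed in the paper.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

# Synthetic two-class data with a few informative and many redundant inputs
X, y = make_classification(n_samples=200, n_features=20, n_informative=4,
                           n_redundant=10, random_state=0)

# Recursively eliminate the weakest input variable until 5 remain
selector = RFE(SVC(kernel="linear"), n_features_to_select=5)
selector.fit(X, y)
kept = selector.support_  # boolean mask of the retained input variables
```

The same loop structure applies when the elimination criterion is replaced by a KFDA-based one: refit, score each remaining variable, drop the weakest, repeat.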

8.
An optimization scheme for kernel Fisher discriminant analysis is proposed, with algorithms given for both two-class and multi-class problems; the scheme has a clear advantage in classification efficiency. In this scheme, "significant" training samples are first selected from the full training set, and classifying a test sample depends only on the kernel functions between the test sample and the "significant" training samples. A recursive algorithm for selecting the "significant" training samples is also designed to reduce the computational complexity. Applying the algorithm to face image databases and benchmark data sets yields good experimental results.

9.
In this paper, the method of kernel Fisher discriminant (KFD) is analyzed and its nature is revealed, i.e., KFD is equivalent to kernel principal component analysis (KPCA) plus Fisher linear discriminant analysis (LDA). Based on this result, a more transparent KFD algorithm is proposed. That is, KPCA is first performed and then LDA is used for a second feature extraction in the KPCA-transformed space. Finally, the effectiveness of the proposed algorithm is verified using the CENPARMI handwritten numeral database.
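The two-stage equivalence stated above (KPCA followed by LDA) can be sketched with scikit-learn on a toy dataset; the dataset, kernel, and parameter choices here are illustrative assumptions, not those of the paper.

```python
from sklearn.datasets import make_circles
from sklearn.decomposition import KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Two concentric circles: not linearly separable in the input space
X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

# Stage 1: kernel PCA maps the data into a space where the classes separate
kpca = KernelPCA(n_components=10, kernel="rbf", gamma=2.0)
Z = kpca.fit_transform(X)

# Stage 2: ordinary Fisher LDA in the KPCA-transformed space
lda = LinearDiscriminantAnalysis().fit(Z, y)
acc = lda.score(Z, y)
```

The transparency claimed in the abstract is visible here: each stage is an off-the-shelf linear-algebra step, with all the nonlinearity confined to the first.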

10.
We reformulate the Quadratic Programming Feature Selection (QPFS) method in a kernel space to obtain a vector which maximizes the quadratic objective function of QPFS. We demonstrate that the vector obtained by Kernel Quadratic Programming Feature Selection is equivalent to the Kernel Fisher vector and, therefore, a new interpretation of kernel Fisher discriminant analysis is given which provides some computational advantages for highly unbalanced datasets.

11.
This paper develops a generalized nonlinear discriminant analysis (GNDA) method and deals with its small sample size (SSS) problems. GNDA is a nonlinear extension of linear discriminant analysis (LDA), while kernel Fisher discriminant analysis (KFDA) can be regarded as a special case of GNDA. In LDA, an undersampling, or small sample size, problem occurs when the sample size is smaller than the sample dimensionality, which results in a singular within-class scatter matrix. Owing to the high-dimensional nonlinear mapping in GNDA, small sample size problems arise rather frequently. To tackle this issue, this research presents five different schemes for GNDA to solve the SSS problems. Experimental results on real-world data sets show that these schemes for GNDA are very effective in tackling small sample size problems.

12.
In this paper, we propose an improved manifold learning method, called uncorrelated local Fisher discriminant analysis (ULFDA), for ear recognition. It is motivated by the fact that the features extracted by local Fisher discriminant analysis are statistically correlated, which may degrade recognition performance. The aim of ULFDA is to seek a feature submanifold such that the within-manifold scatter is minimized and the between-manifold scatter is maximized simultaneously in the embedding space, using a new difference-based optimization objective function. Moreover, we impose an appropriate constraint to make the extracted features statistically uncorrelated. As a result, the proposed algorithm not only derives the optimal and lossless discriminative information, but also guarantees that all extracted features are statistically uncorrelated. Experiments on synthetic data and the Spain, USTB-2 and CEID ear databases are performed to demonstrate the effectiveness of the proposed method.

13.
This paper develops a new image feature extraction and recognition method coined two-dimensional linear discriminant analysis (2DLDA). 2DLDA provides a sequentially optimal image compression mechanism, compacting the discriminant information into the upper-left corner of the image. 2DLDA also suggests a feature selection strategy to select the most discriminative features from that corner. 2DLDA is tested and evaluated using the AT&T face database. The experimental results show that 2DLDA is more effective and computationally more efficient than current LDA algorithms for face feature extraction and recognition.

14.
Small sample size and high computational complexity are two major problems encountered when traditional kernel discriminant analysis methods are applied to high-dimensional pattern classification tasks such as face recognition. In this paper, we introduce a new kernel discriminant learning method, which is able to effectively address the two problems by using regularization and subspace decomposition techniques. Experiments performed on real face databases indicate that the proposed method outperforms, in terms of classification accuracy, existing kernel methods, such as kernel principal component analysis and kernel linear discriminant analysis, at a significantly reduced computational cost.

15.
A novel fuzzy nonlinear classifier, called kernel fuzzy discriminant analysis (KFDA), is proposed to deal with linearly non-separable problems. With kernel methods, KFDA can perform efficient classification in the kernel feature space: through a nonlinear mapping, the input data are mapped implicitly into a high-dimensional kernel feature space in which nonlinear patterns appear linear. Unlike fuzzy discriminant analysis (FDA), which is based on Euclidean distance, KFDA uses a kernel-induced distance. Theoretical analysis and experimental results show that the proposed classifier compares favorably with FDA.
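The kernel-induced distance that distinguishes KFDA from Euclidean-distance FDA can be computed from kernel evaluations alone, since ||φ(x) − φ(y)||² = k(x,x) − 2k(x,y) + k(y,y). A minimal sketch with an RBF kernel (the kernel choice and test points are illustrative):

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    # Gaussian RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)
    return np.exp(-gamma * np.sum((x - y) ** 2))

def kernel_dist_sq(x, y, k):
    # Squared distance between phi(x) and phi(y) in the kernel feature space,
    # computed without ever forming phi explicitly
    return k(x, x) - 2.0 * k(x, y) + k(y, y)

x = np.array([0.0, 0.0])
y = np.array([1.0, 0.0])
d2 = kernel_dist_sq(x, y, rbf)  # equals 2 - 2*exp(-1) for this kernel
```

Any distance-based discriminant rule can thus be "kernelized" by swapping the Euclidean distance for this expression.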

16.
Fault detection and diagnosis (FDD) in chemical process systems is an important tool for effective process monitoring to ensure process safety. Multi-scale classification offers various advantages for monitoring chemical processes, which are generally driven by events in different time and frequency domains. However, issues arise when dealing with highly interrelated, complex, and noisy databases of large dimensionality. Therefore, a new FDD framework is proposed based on wavelet analysis, kernel Fisher discriminant analysis (KFDA), and support vector machine (SVM) classifiers. The main objective of this work was to combine the advantages of these tools to enhance diagnosis performance on a chemical process system. Initially, a discrete wavelet transform (DWT) was applied to extract the dynamics of the process at different scales. The wavelet coefficients obtained during the analysis were reconstructed using the inverse discrete wavelet transform (IDWT) and then fed into KFDA to produce discriminant vectors. Finally, the discriminant vectors were used as inputs to the SVM classifiers, which classified the feature sets extracted by the proposed method. The performance of the proposed multi-scale KFDA-SVM method for fault classification and diagnosis was analysed and compared using a simulated Tennessee Eastman process as a benchmark. The results showed that the proposed multi-scale KFDA-SVM framework, with an average classification accuracy of 96.79%, improves on the multi-scale KFDA-GMM method (84.94%) and the established independent component analysis-SVM method (95.78%) for faults in the Tennessee Eastman process.
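A toy sketch of the wavelet-features-then-classifier idea in this pipeline: one level of a hand-rolled Haar DWT feeds an SVM directly. The KFDA stage is skipped and synthetic mean-shift "fault" signals stand in for the Tennessee Eastman data, so everything here is an illustrative assumption.

```python
import numpy as np
from sklearn.svm import SVC

def haar_dwt(signal):
    # One level of a Haar discrete wavelet transform:
    # approximation (low-pass) and detail (high-pass) coefficients
    s = signal.reshape(-1, 2)
    approx = (s[:, 0] + s[:, 1]) / np.sqrt(2)
    detail = (s[:, 0] - s[:, 1]) / np.sqrt(2)
    return approx, detail

rng = np.random.default_rng(1)
n, t = 100, np.linspace(0, 1, 16)
normal = np.sin(2 * np.pi * t) + 0.1 * rng.normal(size=(n, 16))
faulty = np.sin(2 * np.pi * t) + 0.8 + 0.1 * rng.normal(size=(n, 16))  # mean-shift fault
X = np.vstack([normal, faulty])
y = np.array([0] * n + [1] * n)

# Wavelet coefficients as features, then an SVM classifier on top
feats = np.array([np.concatenate(haar_dwt(x)) for x in X])
clf = SVC(kernel="rbf").fit(feats, y)
acc = clf.score(feats, y)
```

In the paper's full pipeline, the KFDA discriminant vectors computed from the (reconstructed) wavelet coefficients would replace `feats` as the SVM input.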

17.
Current linear discriminant analysis is based on either the Fisher criterion or the pairwise-class weighted Fisher criterion, but the former cannot restrain outlier classes and the latter is computationally expensive. In view of this, an improved Fisher criterion for linear discriminant analysis is proposed. The Fisher criterion and the pairwise-class weighted Fisher criterion are reviewed, and the root cause of their problems is identified. Definitions of class distance and class outlier degree are given: the outlier degree of each class is judged from the class distances, each class is weighted according to its outlier degree, and the overall class mean and the between-class scatter matrix are recomputed, yielding an improved Fisher criterion that restrains outlier classes and emphasizes regular ones. The improved Fisher criterion is simple to compute and restrains outlier classes effectively.

18.
Complex chemical processes are often corrupted by faults of multiple types, and normal training data alone cannot build an accurate operating model. To improve fault detection and classification in complex chemical processes, and because the traditional Fisher discriminant analysis (FDA) algorithm cannot be applied to multimodal fault data, a fault diagnosis method based on local Fisher discriminant analysis (LFDA) is proposed. First, the local within-class and between-class scatter matrices of the training data are computed to find the LFDA projection directions. Next, the training and test data are projected onto the projection vectors to extract feature vectors. Finally, the Euclidean distances between feature vectors are computed and a KNN classifier performs the classification. The proposed LFDA method is applied to the Tennessee Eastman (TE) process, and the monitoring results show that LFDA outperforms FDA and kernel Fisher discriminant analysis (KFDA), indicating that LFDA offers higher accuracy and sensitivity in classifying and detecting faults of different types.
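The project-then-classify procedure described above can be sketched as follows, substituting plain Fisher discriminant analysis for the local variant and the Iris data for the TE process; the projection is followed by a Euclidean-distance KNN classifier, as in the abstract.

```python
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Step 1-2: find discriminant directions and project the data onto them
lda = LinearDiscriminantAnalysis(n_components=2)
Z = lda.fit_transform(X, y)

# Step 3: classify the projected feature vectors with a Euclidean-distance KNN
knn = KNeighborsClassifier(n_neighbors=5)  # Euclidean metric by default
knn.fit(Z, y)
acc = knn.score(Z, y)
```

In the paper's method only the scatter matrices change: local (affinity-weighted) within-class and between-class scatter replace the global ones before the eigenproblem is solved.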

19.
黄可坤 (HUANG Kekun). Journal of Computer Applications (《计算机应用》), 2013, 33(6): 1723-1726.
When marginal Fisher analysis (MFA) is applied to face recognition, it suffers from the small sample size problem. If principal component analysis is used for dimensionality reduction to handle this problem, some components useful for classification are lost; if the MFA objective function is replaced by the maximum margin criterion, the optimal parameters are hard to determine. A regularized MFA method is proposed: a regularization term, formed by multiplying the identity matrix by a small number, is added to the within-class scatter matrix of MFA, so that the resulting matrix is invertible, no components useful for classification are discarded, and the parameter is easy to set. Since a sample can usually be well represented linearly by a few nearby samples of the same class, a sparse representation classifier is applied after regularized-MFA dimensionality reduction to further improve the recognition rate. Experiments on the FERET and AR databases show that, compared with several classical dimensionality reduction methods, the proposed method significantly improves the recognition rate.
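The regularization trick described here, adding a small multiple of the identity to a singular within-class scatter matrix to make it invertible, can be sketched as follows; the dimensions and the constant ε are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 20))      # 5 samples in 20 dimensions: small sample size
Xc = X - X.mean(axis=0)           # center the samples
Sw = Xc.T @ Xc                    # scatter matrix, rank <= 4, hence singular
eps = 1e-3                        # small regularization constant
Sw_reg = Sw + eps * np.eye(20)    # regularized scatter: all eigenvalues >= eps
```

Because every eigenvalue of `Sw_reg` is at least ε, the matrix is invertible, while directions carrying discriminant information are perturbed only slightly.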

20.
A Kernel Fisher Discriminant Analysis Based on Space Transformation
陈才扣 (CHEN Caikou), 高林 (GAO Lin), 杨静宇 (YANG Jingyu). Computer Engineering (《计算机工程》), 2005, 31(8): 17-18, 60.
Introducing the idea of space transformation, a kernel Fisher discriminant analysis based on space transformation is proposed. Unlike KFDA, the method needs to be executed only in a lower-dimensional space, which greatly reduces the computation required to solve for the optimal set of discriminant vectors and improves computational speed. Experimental results on the ORL standard face database verify the effectiveness of the proposed method.
