Similar Documents
20 similar documents found (search time: 31 ms)
1.
Nonlinear discriminant analysis may be transformed into the form of kernel-based discriminant analysis, so the corresponding discriminant direction can be solved from linear equations. From the viewpoint of the feature space, nonlinear discriminant analysis is still a linear method, and it is provable that in the feature space the method is equivalent to Fisher discriminant analysis. We observe that a linear combination of a subset of the training samples, called “significant nodes”, can to some extent replace the full training set in expressing the corresponding discriminant vector in the feature space. In this paper, an efficient algorithm is proposed to determine the “significant nodes” one by one. The principle for determining “significant nodes” is simple and reasonable, and the resulting algorithm runs at acceptable computational cost. Classification can then be implemented using only the kernel functions between a test sample and the “significant nodes”. The proposed method is called the fast kernel-based nonlinear method (FKNM). Notably, the number of “significant nodes” may be much smaller than the number of training samples, so for two-class classification problems the FKNM is much more efficient than the naive kernel-based nonlinear method (NKNM). The FKNM can also be applied to multi-class problems via two approaches: one-against-the-rest and one-against-one. Although one-against-one is often considered superior to one-against-the-rest in classification efficiency, for the FKNM one-against-the-rest appears more efficient. Experiments on benchmark and real datasets illustrate that, for both two-class and multi-class classification, the FKNM is effective, feasible, and highly efficient.
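For reference, the core computation this abstract builds on is the two-class kernel Fisher discriminant, whose direction follows from a linear system over kernel evaluations. Below is a minimal numpy sketch over the full training set; the paper's selection of “significant nodes” is not reproduced, and the RBF kernel and regularization constant are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel matrix between rows of X and rows of Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_fisher_direction(X, y, gamma=1.0, mu=1e-3):
    """Two-class kernel Fisher discriminant: solve (N + mu*I) alpha = M1 - M0.

    X: (n, d) training samples; y: (n,) labels in {0, 1}.
    Returns alpha (n,), expressing the discriminant direction over all samples.
    """
    K = rbf_kernel(X, X, gamma)                      # (n, n) Gram matrix
    n = len(y)
    M, N = [], np.zeros((n, n))
    for c in (0, 1):
        Kc = K[:, y == c]                            # columns for class c
        nc = Kc.shape[1]
        M.append(Kc.mean(axis=1))                    # kernelized class mean
        # Within-class scatter in feature space: Kc (I - 1/nc J) Kc^T
        N += Kc @ (np.eye(nc) - np.full((nc, nc), 1.0 / nc)) @ Kc.T
    return np.linalg.solve(N + mu * np.eye(n), M[1] - M[0])

def project(alpha, X_train, X_test, gamma=1.0):
    # Score of test samples along the kernel discriminant direction.
    return rbf_kernel(X_test, X_train, gamma) @ alpha
```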

2.
Block-wise 2D kernel PCA/LDA for face recognition
Direct extension of (2D) matrix-based linear subspace algorithms to a kernel-induced feature space is computationally intractable and also fails to exploit the local characteristics of the input data. In this letter, we develop a 2D generalized framework which integrates the concept of kernel machines with 2D principal component analysis (PCA) and 2D linear discriminant analysis (LDA). To remedy these drawbacks, we propose a block-wise approach based on the assumption that the data are multi-modally distributed in so-called block manifolds. The proposed methods, namely block-wise 2D kernel PCA (B2D-KPCA) and block-wise 2D generalized discriminant analysis (B2D-GDA), attempt to find local nonlinear subspace projections in each block manifold, or alternatively search for linear subspace projections in the kernel space associated with each blockset. Experimental results on the ORL face database attest to the reliability of the proposed block-wise approach compared with related published methods.

3.
Owing to the complexity of radar targets and their environments, the relationships among different targets are often nonlinear. This paper studies kernel-based nonlinear methods and applies them to radar target recognition from one-dimensional range profiles. Kernel Fisher discriminant analysis (KFDA) is one of the most effective methods for extracting nonlinear features, but it often suffers from the small-sample-size problem. To address this, a null-KFDA method is presented for feature extraction from range profiles. A new kernel-based nonlinear classifier, the kernel-based nonlinear representor (KNR), is then used to classify the extracted features. Experiments on measured range profiles of three types of aircraft verify the effectiveness of null-KFDA. Moreover, compared with the nonlinear support vector machine (SVM) and the radial basis function neural network (RBFNN), the KNR classifier achieves better recognition performance.

4.
This paper examines the applicability of several learning techniques to speech recognition, more precisely, to the classification of phonemes represented by a particular segment model. The methods compared were the IB1 algorithm (TiMBL), ID3 tree learning (C4.5), oblique tree learning (OC1), artificial neural nets (ANN), and Gaussian mixture modeling (GMM); as a reference, a hidden Markov model (HMM) recognizer was also trained on the same corpus. Before being fed to the learners, the segmental features were additionally transformed using linear discriminant analysis (LDA), principal component analysis (PCA), or independent component analysis (ICA), and each learner was tested with each transformation to find the best combination. Furthermore, we experimented with several feature sets, such as filter-bank energies, mel-frequency cepstral coefficients (MFCC), and gravity centers. We found that LDA helped all the learners, in several cases quite considerably. PCA was beneficial only for some of the algorithms, while ICA rarely improved the results and degraded performance for certain learning methods. From the learning viewpoint, ANN was the most effective and attained the same results independently of the transformation applied. GMM behaved worse, which shows the advantage of discriminative over generative learning. TiMBL produced reasonable results; C4.5 and OC1 could not compete, no matter which transformation was tried.

5.
Linear subspace analysis methods have been successfully applied to extract features for face recognition. But they are inadequate to represent the complex and nonlinear variations of real face images, such as illumination, facial expression and pose variations, because of their linear properties. In this paper, a nonlinear subspace analysis method, Kernel-based Nonlinear Discriminant Analysis (KNDA), is presented for face recognition, which combines the nonlinear kernel trick with the linear subspace analysis method, Fisher Linear Discriminant Analysis (FLDA). First, the kernel trick is used to project the input data into an implicit feature space; then FLDA is performed in this feature space, yielding nonlinear discriminant features of the input data. In addition, in order to reduce the computational complexity, a geometry-based feature vector selection scheme is adopted. Another similar nonlinear subspace analysis method is Kernel-based Principal Component Analysis (KPCA), which combines the kernel trick with linear Principal Component Analysis (PCA). Experiments are performed with the polynomial kernel, and KNDA is compared with KPCA and FLDA. Extensive experimental results show that KNDA gives a higher recognition rate than KPCA and FLDA.
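A rough approximation of the kernel-trick-then-FLDA pipeline can be assembled from standard components: kernel PCA with a polynomial kernel (as in the paper's experiments) followed by Fisher LDA in the induced space. The sketch below uses scikit-learn on a stand-in dataset; the component count and kernel degree are illustrative assumptions, and the paper's geometry-based feature vector selection is not reproduced.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X, y = load_digits(return_X_y=True)

# Kernel PCA projects into the polynomial-kernel feature space;
# LDA then finds the discriminant directions in that space.
knda_like = make_pipeline(
    KernelPCA(n_components=60, kernel="poly", degree=2),
    LinearDiscriminantAnalysis(),
)
print(cross_val_score(knda_like, X, y, cv=5).mean())
```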

6.
Research on the Procedure of the KPCA Method and Its Applications
A kernel-based principal component analysis method is presented, aimed mainly at feature extraction from large-scale nonlinear data. The paper gives a simplified computation and derivation of the covariance matrix, together with a detailed derivation of the KPCA method. Finally, kernel principal component analysis is applied to both linearly and nonlinearly distributed data, achieving better results than traditional principal component analysis.
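The simplified covariance computation the abstract mentions reduces to an eigenproblem on the centered Gram matrix rather than the (possibly infinite-dimensional) feature-space covariance. The numpy sketch below shows this standard computation; the RBF kernel and its width are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def kpca(X, gamma=0.1, n_components=2):
    """Minimal KPCA sketch: eigendecompose the doubly centered Gram
    matrix in place of the feature-space covariance matrix."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)                        # RBF Gram matrix
    n = len(X)
    One = np.full((n, n), 1.0 / n)
    Kc = K - One @ K - K @ One + One @ K @ One     # double centering
    vals, vecs = np.linalg.eigh(Kc)                # ascending eigenvalues
    vals = np.maximum(vals[::-1], 1e-12)           # sort descending, guard
    vecs = vecs[:, ::-1]
    alphas = vecs[:, :n_components] / np.sqrt(vals[:n_components])
    return Kc @ alphas                             # projected training data
```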

7.
Linear discriminant analysis (LDA) is a dimension reduction method which finds an optimal linear transformation that maximizes class separability. However, in undersampled problems, where the number of data samples is smaller than the dimension of the data space, it is difficult to apply LDA because of the singularity of the scatter matrices caused by high dimensionality. In order to make LDA applicable, several generalizations of LDA have been proposed recently. In this paper, we present theoretical and algorithmic relationships among several generalized LDA algorithms and compare their computational complexities and performances in text classification and face recognition. Towards a practical dimension reduction method for high-dimensional data, an efficient algorithm is proposed which greatly reduces the computational complexity while achieving competitive prediction accuracies. We also present nonlinear extensions of these LDA algorithms based on kernel methods. It is shown that a generalized eigenvalue problem can be formulated in the kernel-based feature space, and generalized LDA algorithms are applied to solve this generalized eigenvalue problem, resulting in nonlinear discriminant analysis. The performances of these linear and nonlinear discriminant analysis algorithms are compared extensively.
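As one concrete instance of the generalizations discussed, the sketch below sets up LDA as the generalized eigenproblem S_b w = λ S_w w and ridge-regularizes S_w so it remains nonsingular in undersampled problems. The regularization value is an illustrative assumption, and this is only one of the several generalized LDA variants the paper compares.

```python
import numpy as np
from scipy.linalg import eigh

def regularized_lda(X, y, n_components, reg=1e-3):
    """LDA via the generalized eigenproblem Sb w = lambda Sw w,
    with a ridge term keeping Sw positive definite when n < d."""
    d = X.shape[1]
    mean = X.mean(axis=0)
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mean, mc - mean)   # between-class
        Sw += (Xc - mc).T @ (Xc - mc)                    # within-class
    vals, vecs = eigh(Sb, Sw + reg * np.eye(d))          # generalized EVP
    order = np.argsort(vals)[::-1][:n_components]        # largest first
    return vecs[:, order]
```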

8.
We present a novel method of nonlinear discriminant analysis involving a set of locally linear transformations, called Locally Linear Discriminant Analysis (LLDA). The underlying idea is that global nonlinear data structures are locally linear, and local structures can be linearly aligned. Input vectors are projected into each local feature space by linear transformations found to yield locally linearly transformed classes that maximize the between-class covariance while minimizing the within-class covariance. In face recognition, linear discriminant analysis (LDA) has been widely adopted owing to its efficiency, but it does not capture the nonlinear manifolds of faces that exhibit pose variations. Conventional kernel-based nonlinear classification methods, such as generalized discriminant analysis (GDA) and the support vector machine (SVM), have been developed to overcome the shortcomings of the linear method, but they suffer from high classification cost and overfitting. Our method targets multiclass nonlinear discrimination and is computationally highly efficient compared with GDA. The method does not suffer from overfitting by virtue of the linear base structure of the solution. A novel gradient-based learning algorithm is proposed for finding the optimal set of local linear bases, and the optimization does not exhibit a local-maxima problem. The transformation functions facilitate robust face recognition in a low-dimensional subspace, under pose variations, using a single model image. Classification results are given for both synthetic and real face data.

9.
Large-margin methods, such as support vector machines (SVMs), have been very successful in classification problems. Recently, maximum margin discriminant analysis (MMDA) was proposed, which extends the large-margin idea to feature extraction. It often outperforms traditional methods such as kernel principal component analysis (KPCA) and kernel Fisher discriminant analysis (KFD). However, as in the SVM, its time complexity is cubic in the number of training points m, and it is thus computationally inefficient on massive data sets. In this paper, we propose a (1+ε)²-approximation algorithm for obtaining the MMDA features by extending the core vector machine. The resultant time complexity is only linear in m, while its space complexity is independent of m. Extensive comparisons with the original MMDA, KPCA, and KFD on a number of large data sets show that the proposed feature extractor can improve classification accuracy, and is also faster than these kernel-based methods by over an order of magnitude.

10.
A Face Recognition Algorithm Based on Two-Dimensional Projection Feature Extraction
Linear projection methods are among the mainstream approaches in face recognition today and have attracted wide attention and made notable progress in recent years. One-dimensional linear projection methods include Eigenfaces and Fisherfaces; two-dimensional linear projection methods include two-dimensional principal component analysis (2DPCA) and two-dimensional linear discriminant analysis (2DLDA), together with a series of their extensions. On this basis, a new feature extraction method based on 2D matrices is presented. Experiments on the standard ORL face database show that, compared with existing methods, the algorithm improves both recognition rate and recognition efficiency, achieving satisfactory results.
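For context, classical 2DPCA, which this abstract builds on, operates directly on image matrices: the image covariance matrix is only w×w, so no vectorization into h·w-dimensional vectors is needed. The numpy sketch below shows this standard step, not the paper's new 2D-matrix method.

```python
import numpy as np

def two_d_pca(images, n_components):
    """Minimal 2DPCA sketch. images: (n, h, w) array.
    The image covariance G = E[(A - Abar)^T (A - Abar)] is (w, w);
    each image A is projected to the (h, n_components) matrix A @ W."""
    Abar = images.mean(axis=0)
    G = sum((A - Abar).T @ (A - Abar) for A in images) / len(images)
    vals, vecs = np.linalg.eigh(G)              # ascending eigenvalues
    W = vecs[:, ::-1][:, :n_components]         # top eigenvectors
    return np.stack([A @ W for A in images]), W
```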

11.
We present a new linear discriminant analysis method based on information theory, where the mutual information between linearly transformed input data and the class labels is maximized. First, we introduce a kernel-based estimate of mutual information with a variable kernel size. Furthermore, we devise a learning algorithm that maximizes the mutual information w.r.t. the linear transformation. Two experiments are conducted: the first one uses a toy problem to visualize and compare the transformation vectors in the original input space; the second one evaluates the performance of the method for classification by employing cross-validation tests on four datasets from the UCI repository. Various classifiers are investigated. Our results show that this method can significantly boost class separability over conventional methods, especially for nonlinear classification.

12.
A novel fuzzy nonlinear classifier, called kernel fuzzy discriminant analysis (KFDA), is proposed to deal with linearly non-separable problems. With kernel methods, KFDA can perform efficient classification in the kernel feature space: through a nonlinear mapping, the input data are mapped implicitly into a high-dimensional feature space where nonlinear patterns appear linear. Unlike fuzzy discriminant analysis (FDA), which is based on Euclidean distance, KFDA uses a kernel-induced distance. Theoretical analysis and experimental results show that the proposed classifier compares favorably with FDA.

13.
In this article, the kernel-based methods explained by a graph embedding framework are analyzed and their nature revealed: any kernel-based method in the graph embedding framework is equivalent to kernel principal component analysis followed by its corresponding linear method. Based on this result, the authors propose a complete framework of kernel-based algorithms. Every algorithm in the framework makes full use of two kinds of discriminant information, irregular and regular. The proposed framework is tested and evaluated on the ORL, Yale and FERET face databases, and the experimental results demonstrate its effectiveness.

14.
Gender Classification from Face Images
Gender classification from faces is the pattern recognition problem of determining a person's gender from a face image. This paper systematically studies the performance of different feature extraction and classification methods on the gender classification problem, including principal component analysis (PCA), Fisher linear discriminant (FLD), optimal feature extraction, AdaBoost, and support vector machines (SVM). Comparative experimental results are given on a nine-pose face database, the FERET face database, and a face database collected from web images. The experiments show that gender information in faces is concentrated in a certain subspace; therefore, appropriate dimensionality reduction of the samples before classification not only does not noticeably degrade classifier performance but also greatly reduces classification time. Finally, a face-image-based gender classification system is described which integrates the gender classifier with an automatic face detection and feature extraction platform.

15.
To address two problems of the linear discriminant analysis (LDA) approach in face recognition, namely the nonlinearity problem and the singularity problem, this paper proposes a novel kernel machine-based rank-lifting regularized discriminant analysis (KRLRDA) method. A rank-lifting theorem is first proven using linear algebraic theory. Combining the rank-lifting strategy with a three-to-one regularization technique, the complete regularized methodology is developed on the within-class scatter matrix. The proposed regularized scheme not only adjusts the projection directions but tunes their corresponding weights as well. Moreover, it is shown that the final regularized within-class scatter matrix approaches the original one as the regularization parameter tends to zero. Two publicly available databases, namely the FERET and CMU PIE face databases, are selected for evaluation. Compared with some existing kernel-based LDA methods, the proposed KRLRDA approach gives superior performance.

16.
Principal component analysis (PCA) and linear discriminant analysis (LDA) are widely used in face recognition, but the high computational complexity of PCA prevents face recognition from meeting real-time requirements, while LDA suffers from the "small sample size" and "edge class" problems, which reduce recognition accuracy. To address these problems, a method fusing two-dimensional principal component analysis (2DPCA) with an improved linear discriminant analysis is proposed. 2DPCA extracts...

17.
Foley-Sammon optimal discriminant vectors using kernel approach
A new nonlinear feature extraction method called kernel Foley-Sammon optimal discriminant vectors (KFSODVs) is presented in this paper. This new method extends the well-known Foley-Sammon optimal discriminant vectors (FSODVs) from the linear domain to a nonlinear domain via the kernel trick used in the support vector machine (SVM) and other commonly used kernel-based learning algorithms. The proposed method also provides an effective technique for solving the so-called small sample size (SSS) problem, which exists in many classification problems such as face recognition. We give the derivation of KFSODV and conduct experiments on both simulated and real data sets to confirm that the KFSODV method is superior to previously used kernel-based learning algorithms in terms of discrimination performance.

18.
A PCA-LDA-SVM Vehicle Type Recognition Algorithm for Doppler Radar
Vehicle detection and vehicle type recognition are important aspects of intelligent transportation systems (ITS), while target recognition remains a difficult problem for low-resolution radar. This paper proposes a vehicle type recognition method using Doppler radar. A vehicle is modeled as a target containing multiple scattering centers; since the distance between a scattering center and the radar is related to the spectral energy, the spectral variation of a target reflects contour features such as its length and height. The effective spectral features are then reduced in dimension by principal component analysis (PCA) combined with linear discriminant analysis (LDA), and classifiers such as support vector machines (SVM) perform the final classification. Cross-validation results of different recognition algorithms are compared, showing that the PCA-LDA-SVM vehicle type recognition algorithm performs well and has broad application prospects.
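The PCA → LDA → SVM chain described here is straightforward to express as a pipeline. The sketch below uses scikit-learn on a stand-in dataset, since the radar spectral features are not available; the component counts and SVM settings are illustrative assumptions.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

clf = make_pipeline(
    PCA(n_components=30),                         # unsupervised reduction
    LinearDiscriminantAnalysis(n_components=9),   # supervised reduction
    SVC(kernel="rbf", C=10.0, gamma="scale"),     # final classifier
)
print(cross_val_score(clf, X, y, cv=5).mean())    # cross-validated accuracy
```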

19.
Kernel principal component analysis (KPCA) and kernel linear discriminant analysis (KLDA) are two commonly used and effective methods for dimensionality reduction and feature extraction. In this paper, we propose a KLDA method based on maximal class separability for extracting the optimal features of analog fault data sets, and compare it with principal component analysis (PCA), linear discriminant analysis (LDA) and KPCA. Meanwhile, a novel particle swarm optimization (PSO) based algorithm is developed to tune the parameters and structures of neural networks jointly. Our study shows that KLDA is overall superior to PCA, LDA and KPCA in feature extraction performance, and that the proposed PSO-based algorithm is convenient to implement and trains better than the back-propagation algorithm. The simulation results demonstrate the effectiveness of these methods.
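A generic global-best PSO loop of the kind used for such tuning can be sketched in a few lines; here the objective f is any black-box function over a box, e.g. cross-validation error as a function of hyperparameters. The coefficients are textbook defaults, an illustrative assumption, and the paper's joint tuning of network structure is not reproduced.

```python
import numpy as np

def pso_minimize(f, lo, hi, n_particles=20, iters=50,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Global-best PSO sketch minimizing f over the box [lo, hi]^d."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = rng.uniform(lo, hi, (n_particles, lo.size))      # positions
    v = np.zeros_like(x)                                 # velocities
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()                 # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val                        # personal bests
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Example: recover the minimum of a simple quadratic bowl at (3, 3).
best, val = pso_minimize(lambda p: ((p - 3.0) ** 2).sum(),
                         lo=[-10, -10], hi=[10, 10])
```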

20.
This paper provides a unifying view of three discriminant linear feature extraction methods: linear discriminant analysis, heteroscedastic discriminant analysis and maximization of mutual information. We propose a model-independent reformulation of the criteria related to these three methods that stresses their similarities and elucidates their differences. Based on assumptions for the probability distribution of the classification data, we obtain sufficient conditions under which two or more of the above criteria coincide. It is shown that these conditions also suffice for Bayes optimality of the criteria. Our approach results in an information-theoretic derivation of linear discriminant analysis and heteroscedastic discriminant analysis. Finally, regarding linear discriminant analysis, we discuss its relation to multidimensional independent component analysis and derive suboptimality bounds based on information theory.
