Similar Documents
20 similar documents found.
1.
In this paper, the kernel Fisher discriminant (KFD) method is analyzed and its nature revealed: KFD is equivalent to kernel principal component analysis (KPCA) followed by Fisher linear discriminant analysis (LDA). Based on this result, a more transparent KFD algorithm is proposed: KPCA is performed first, and LDA is then used for a second feature extraction in the KPCA-transformed space. Finally, the effectiveness of the proposed algorithm is verified on the CENPARMI handwritten numeral database.
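The two-phase decomposition described in this abstract (KPCA followed by LDA) can be sketched with off-the-shelf scikit-learn components; the toy dataset, kernel choice, and parameter values below are illustrative, not the paper's experimental setup:

```python
from sklearn.datasets import make_moons
from sklearn.decomposition import KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

# Toy two-class nonlinear problem.
X, y = make_moons(n_samples=300, noise=0.15, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Phase 1: nonlinear feature extraction with KPCA.
kpca = KernelPCA(n_components=10, kernel="rbf", gamma=2.0)
Z_tr = kpca.fit_transform(X_tr)
Z_te = kpca.transform(X_te)

# Phase 2: Fisher LDA as a second feature extraction / classification
# step in the KPCA-transformed space.
lda = LinearDiscriminantAnalysis()
lda.fit(Z_tr, y_tr)
acc = lda.score(Z_te, y_te)
print("test accuracy:", acc)
```

The division of labor mirrors the equivalence stated above: KPCA supplies the nonlinearity, and plain LDA in the transformed space supplies the discriminant directions.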

2.
Research on Kernel Machines
This paper surveys a recent hot topic in machine learning research: kernel machines. It first analyzes the main idea of kernel methods, then introduces several recently developed kernel machines, including supervised learning algorithms such as the support vector machine and kernel Fisher discriminant analysis, as well as unsupervised learning algorithms such as kernel principal component analysis, and finally discusses applications and prospects.

3.
Bo L  Wang L  Jiao L 《Neural computation》2006,18(4):961-978
Kernel Fisher discriminant analysis (KFD) is a successful approach to classification. It is well known that the key challenge in KFD lies in the selection of free parameters such as kernel parameters and regularization parameters. Here we focus on the feature-scaling kernel, in which each feature is individually associated with a scaling factor. A novel algorithm, named FS-KFD, is developed to tune the scaling factors and regularization parameters for the feature-scaling kernel. The proposed algorithm optimizes a smooth leave-one-out error via a gradient-descent method and has been demonstrated to be computationally feasible. FS-KFD is motivated by two fundamental facts: the leave-one-out error of KFD can be expressed in closed form, and the step function can be approximated by a sigmoid function. Empirical comparisons on artificial and benchmark data sets suggest that FS-KFD improves on KFD in terms of classification accuracy.
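The two facts FS-KFD rests on can be illustrated numerically: a 0/1 leave-one-out-style error written with the step function, and the differentiable sigmoid surrogate that makes gradient descent possible. The margin values and sharpness parameter beta below are made up for illustration:

```python
import numpy as np

def step_loss(margins):
    # Exact 0/1-style error: count of non-positive margins.
    return int(np.sum(margins <= 0))

def sigmoid_loss(margins, beta=5.0):
    # Differentiable surrogate: sigmoid(-beta * margin) tends to the
    # step function as beta grows, enabling gradient-based tuning.
    z = np.clip(beta * margins, -500, 500)   # avoid overflow in exp
    return float(np.sum(1.0 / (1.0 + np.exp(z))))

margins = np.array([-1.2, -0.1, 0.3, 0.8, 2.0])  # made-up margins
print(step_loss(margins))
print(sigmoid_loss(margins, beta=50.0))
```

With a large beta the surrogate closely tracks the exact error count while remaining smooth in the parameters that determine the margins.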

4.
This paper examines the theory of kernel Fisher discriminant analysis (KFD) in a Hilbert space and develops a two-phase KFD framework: kernel principal component analysis (KPCA) plus Fisher linear discriminant analysis (LDA). This framework provides novel insights into the nature of KFD. Based on this framework, the authors propose a complete kernel Fisher discriminant analysis (CKFD) algorithm. CKFD can carry out discriminant analysis in "double discriminant subspaces." Because it makes full use of two kinds of discriminant information, regular and irregular, CKFD is a more powerful discriminator. The proposed algorithm was tested and evaluated on the FERET face database and the CENPARMI handwritten numeral database. The experimental results show that CKFD outperforms other KFD algorithms.

5.
Kernel pooled local subspaces for classification
We investigate the use of subspace analysis methods for learning low-dimensional representations for classification. We propose a kernel-pooled local discriminant subspace method and compare it against two competing techniques, kernel principal component analysis (KPCA) and generalized discriminant analysis (GDA), on classification problems. We evaluate the classification performance of the nearest-neighbor rule with each subspace representation. Experimental results on several data sets demonstrate the effectiveness of the kernel-pooled subspace method and its superiority over KPCA and GDA in some classification problems.

6.
Kernel principal component analysis (KPCA) and kernel linear discriminant analysis (KLDA) are two commonly used and effective methods for dimensionality reduction and feature extraction. In this paper, we propose a KLDA method based on maximal class separability for extracting the optimal features of analog fault data sets, and compare it with principal component analysis (PCA), linear discriminant analysis (LDA) and KPCA. Meanwhile, a novel particle swarm optimization (PSO) based algorithm is developed to jointly tune the parameters and structures of neural networks. Our study shows that KLDA is overall superior to PCA, LDA and KPCA in feature extraction performance, and that the proposed PSO-based algorithm is convenient to implement and trains better than the back-propagation algorithm. The simulation results demonstrate the effectiveness of these methods.

7.
We propose a new criterion for discriminative dimension reduction, max-min distance analysis (MMDA). Given a data set with C classes, represented by homoscedastic Gaussians, MMDA maximizes the minimum pairwise distance of these C classes in the selected low-dimensional subspace. Thus, unlike Fisher's linear discriminant analysis (FLDA) and other popular discriminative dimension reduction criteria, MMDA duly considers the separation of all class pairs. To deal with the general case of data distributions, we also extend MMDA to kernel MMDA (KMMDA). Dimension reduction via MMDA/KMMDA leads to a nonsmooth max-min optimization problem with orthonormality constraints, which we solve approximately with a sequential convex relaxation algorithm. To evaluate the effectiveness of the proposed criterion and the corresponding algorithm, we conduct classification and data visualization experiments on both synthetic and real data sets. Experimental results demonstrate the effectiveness of MMDA/KMMDA combined with the proposed optimization algorithm.
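The MMDA criterion itself (as opposed to the convex-relaxation solver) is easy to state in code: project the class means by an orthonormal W and take the smallest pairwise distance, the quantity MMDA maximizes over W. The class means and projection below are illustrative:

```python
import numpy as np
from itertools import combinations

def min_pairwise_distance(means, W):
    # Project each class mean by W (orthonormal columns) and return the
    # smallest distance over all class pairs; MMDA seeks the W that
    # maximizes this quantity.
    projected = [W.T @ m for m in means]
    return min(float(np.linalg.norm(a - b))
               for a, b in combinations(projected, 2))

# Three illustrative class means in 3-D and a projection onto two axes.
means = [np.array([0.0, 0.0, 0.0]),
         np.array([3.0, 0.0, 0.0]),
         np.array([0.0, 4.0, 0.0])]
W = np.eye(3)[:, :2]
print(min_pairwise_distance(means, W))
```

Unlike Fisher's criterion, which averages class separation, this objective is governed entirely by the worst-separated pair.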

8.
A Nonlinear Discriminant Feature Fusion Method for Face Recognition
Recently, kernel methods for extracting nonlinear features, such as kernel Fisher discriminant analysis (KFDA), have been applied successfully and widely in face and other image recognition. Existing kernel methods, however, share a common problem: constructing the kernel matrix in feature space is computationally very expensive, and a single class of extracted features often fails to achieve satisfactory recognition results. This paper proposes a nonlinear discriminant feature fusion method for face recognition. First, the wavelet transform and singular value decomposition are applied to the original input samples as dimensionality-reducing transformations, extracting two classes of features from the same sample space; these two classes of features are then combined with complex vectors to form a complex feature vector space; finally, optimal discriminant features are extracted in that space. Experimental results on the standard ORL face database show that the proposed method not only outperforms existing kernel Fisher discriminant analysis in recognition performance, but also speeds up feature extraction on the ORL database by nearly a factor of eight.

9.
A new kernel-space feature extraction algorithm based on the Bhattacharyya distance criterion is proposed. Its core idea is to map samples nonlinearly into a high-dimensional kernel space, find a set of optimal feature vectors there, and then map the samples linearly into a low-dimensional feature space in which the between-class Bhattacharyya distance is maximized, thereby minimizing the upper bound on the Bayes classification error. Using the kernel trick, the feature extraction problem is converted into a quadratic programming (QP) optimization problem, which guarantees global convergence and fast computation. The algorithm has two advantages: (1) the extracted features are more effective for classifying the data; (2) for a given pattern classification problem, it can predict an upper bound on the number of feature vectors required without loss of classification accuracy, and it extracts features that are effective for classification. Experimental results agree with the theoretical analysis and show that the algorithm outperforms commonly used feature extraction algorithms.

10.
Linear subspace analysis methods have been successfully applied to extract features for face recognition. But because of their linear nature, they are inadequate to represent the complex and nonlinear variations of real face images, such as illumination, facial expression and pose variations. In this paper, a nonlinear subspace analysis method, Kernel-based Nonlinear Discriminant Analysis (KNDA), is presented for face recognition, which combines the nonlinear kernel trick with the linear subspace analysis method Fisher Linear Discriminant Analysis (FLDA). First, the kernel trick is used to project the input data into an implicit feature space; then FLDA is performed in this feature space, yielding nonlinear discriminant features of the input data. In addition, to reduce the computational complexity, a geometry-based feature vector selection scheme is adopted. Another similar nonlinear subspace analysis method is Kernel-based Principal Component Analysis (KPCA), which combines the kernel trick with linear Principal Component Analysis (PCA). Experiments are performed with the polynomial kernel, and KNDA is compared with KPCA and FLDA. Extensive experimental results show that KNDA achieves a higher recognition rate than KPCA and FLDA.

11.
A New Kernel Linear Discriminant Analysis Algorithm and Its Application to Face Recognition
Kernel Fisher discriminant analysis (KFD), built on the kernel strategy, has become one of the most effective methods for nonlinear feature extraction. Previous KFD-based feature extraction, however, has been formulated for two-class problems, and extracting effective classification features from overlapping (outlier) samples has not been satisfactorily solved. Drawing on fuzzy set theory and the concept of a fuzzy membership function, this paper incorporates the distribution information of the samples into the feature extraction process and proposes a new kernel Fisher discriminant method: a fuzzy kernel discriminant analysis algorithm. Experimental results on the ORL face database verify the effectiveness of the algorithm.

12.
This paper presents a shift-invariant scene classification method based on local autocorrelation of similarities with subspaces. Conventional scene classification methods used bag-of-visual-words, but kernel principal component analysis (KPCA) of visual words has been reported to be more accurate. Here we also use KPCA of visual words to extract rich information for classification. In the original KPCA of visual words, all local parts mapped into the subspace were integrated by summation to be robust to the order, number, and shift of local parts. This approach discarded properties that are effective for scene classification, such as the relation with neighboring regions. To exploit them, we use a (normalized) local autocorrelation (LAC) feature of the similarities with subspaces (the outputs of KPCA of visual words). The feature captures the relation with neighboring regions while remaining robust to shifts of objects in scenes. The proposed method is compared with conventional scene classification methods using the same database and protocol, and we demonstrate its effectiveness.

13.
Classic kernel principal component analysis (KPCA) is computationally inefficient when extracting features from large data sets. In this paper, we propose efficient KPCA (EKPCA), an algorithm that enhances the computational efficiency of KPCA by using a linear combination of a small portion of the training samples, referred to as basic patterns, to approximately express the KPCA feature extractor, that is, the eigenvector of the covariance matrix in the feature space. We show that feature correlation (i.e., the correlation between different feature components) can be evaluated by the cosine distance between the kernel vectors, which are the column vectors of the kernel matrix. The proposed algorithm is easy to implement: it first uses feature correlation evaluation to determine the basic patterns, and then uses these to reconstruct the KPCA model, perform feature extraction, and classify the test samples. Since there are usually far fewer basic patterns than training samples, EKPCA feature extraction is much more computationally efficient than that of KPCA. Experimental results on several benchmark data sets show that EKPCA is much faster than KPCA while achieving similar classification performance.
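The basic-pattern idea can be sketched as follows, with a simple greedy rule standing in for the paper's exact selection procedure: a kernel-matrix column is kept only if its cosine similarity with every previously kept column stays below a threshold, so redundant columns are pruned. The data, RBF kernel, and threshold are illustrative:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    # Pairwise RBF kernel matrix.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def select_basic_patterns(K, threshold=0.95):
    # Greedy stand-in for basic-pattern selection: keep a kernel column
    # only if its cosine similarity with every kept column is below
    # the threshold (i.e. it is not redundant).
    kept = [0]
    norms = np.linalg.norm(K, axis=0)
    for j in range(1, K.shape[1]):
        cos = K[:, kept].T @ K[:, j] / (norms[kept] * norms[j])
        if np.all(cos < threshold):
            kept.append(j)
    return kept

rng = np.random.default_rng(0)
base = rng.normal(size=(50, 2))
# 50 points plus 50 near-duplicates: the duplicates' kernel columns are
# almost identical to the originals' and should be pruned.
X = np.vstack([base, base + 0.01 * rng.normal(size=(50, 2))])
K = rbf_kernel(X, X)
basic = select_basic_patterns(K)
print(len(basic), "basic patterns of", len(X), "samples")
```

The KPCA model rebuilt on the kept columns then works with a much smaller kernel matrix, which is where the claimed speedup comes from.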

14.
For classifying large data sets, we propose a discriminant kernel that introduces a nonlinear mapping from the joint space of input data and output labels to a discriminant space. Our method differs from traditional ones, which map nonlinearly from the input space to a feature space. The induced distance of our discriminant kernel is Euclidean and Fisher separable, as it is defined from distance vectors of the feature space to distance vectors on the discriminant space. Unlike support vector machines or kernel Fisher discriminant analysis, the classifier does not need to solve a quadratic programming problem or eigen-decomposition problems, so it is especially appropriate for processing large data sets. The classifier is applied to face recognition, shape comparison and image classification benchmark data sets. The method is significantly faster than other methods while delivering comparable classification accuracy.

15.
Dimensionality Reduction of Rotor Fault Data Sets by Fusing Global and Local Discriminant Information
Traditional dimensionality reduction methods cannot simultaneously preserve global feature information and local discriminant information. To address this, a dimensionality reduction method for rotor fault data sets is proposed that combines kernel principal component analysis (KPCA) with orthogonal locality sensitive discriminant analysis (OLSDA). KPCA is first used to reduce correlations in the data set and eliminate redundant attributes, preserving the global nonlinear information of the original data to the greatest possible extent; OLSDA then fully mines the local manifold structure of the data, extracting highly discriminative low-dimensional intrinsic features. A distinctive aspect of the method is that performing orthogonalization at the same time prevents distortion of the local subspace structure. Three-dimensional plots display the low-dimensional results intuitively, and the recognition rate of a K-nearest neighbor (KNN) classifier fed with the low-dimensional feature subset, together with the between-class scatter Sb and within-class scatter Sw from cluster analysis, serve as metrics of the dimensionality reduction effect. Experiments show that the method extracts global and local discriminant information comprehensively, making fault classification clearer and noticeably improving recognition accuracy. This study provides a theoretical reference for visualizing and classifying high-dimensional, nonlinear mechanical fault data sets.

16.
张成  李娜  李元  逄玉俊 《计算机应用》2014,34(10):2895-2898
To address the empirical selection of the Gaussian kernel parameter β in kernel principal component analysis (KPCA), a discriminative kernel-parameter selection method for KPCA is proposed. Within-class and between-class kernel widths are computed from the class labels of the training samples, and the kernel parameter is then determined from these widths by a discriminative selection procedure. The kernel matrix determined by the discriminatively selected parameter accurately describes the structural characteristics of the training space. Principal component analysis (PCA) then decomposes the feature space and extracts principal components to achieve dimensionality reduction and feature extraction. The discriminative kernel-width method selects a smaller width in densely classified regions and a larger width in sparsely classified regions. Discriminant KPCA (Dis-KPCA) is applied to a simulated data example and to the Tennessee Eastman process (TEP). Compared with KPCA and PCA, the experimental results show that Dis-KPCA effectively reduces the dimensionality of the sample data and separates the three classes of data completely, so the proposed method achieves higher dimensionality reduction accuracy.

17.
Traditional PCA and LDA algorithms are limited by the small sample size problem and are insensitive to higher-order correlations among pixels. This paper combines the kernel-function method with normalized LDA: the original image space is transformed into a high-dimensional feature space by a nonlinear mapping, and discriminant analysis is applied in the new space with the help of the kernel trick. Extensive experiments on the ORL face database show that the method outperforms PCA, KPCA, LDA and other methods in feature extraction, and achieves a high recognition rate while simplifying the classifier.

18.
Small sample size and high computational complexity are two major problems encountered when traditional kernel discriminant analysis methods are applied to high-dimensional pattern classification tasks such as face recognition. In this paper, we introduce a new kernel discriminant learning method, which is able to effectively address the two problems by using regularization and subspace decomposition techniques. Experiments performed on real face databases indicate that the proposed method outperforms, in terms of classification accuracy, existing kernel methods, such as kernel principal component analysis and kernel linear discriminant analysis, at a significantly reduced computational cost.
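The regularization half of this approach can be illustrated with a small numeric example: in the small-sample-size case a scatter-type matrix is rank-deficient, and adding a ridge term makes it full rank and hence invertible. The dimensions and ridge value are illustrative, and for brevity the scatter here is computed about the origin rather than as a true within-class scatter:

```python
import numpy as np

rng = np.random.default_rng(1)
# Small-sample-size setting: 10 samples in 50 dimensions.
X = rng.normal(size=(10, 50))
Sw = X.T @ X / len(X)             # scatter about the origin (simplified)

lam = 1e-3
Sw_reg = Sw + lam * np.eye(50)    # ridge-regularized scatter

rank_raw = np.linalg.matrix_rank(Sw)      # at most 10, far below 50
rank_reg = np.linalg.matrix_rank(Sw_reg)  # full rank after regularization
print(rank_raw, rank_reg)
```

Without the ridge term, discriminant criteria that invert the scatter matrix are undefined; with it, every eigenvalue is bounded below by lam.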

19.
In wearable fall detection for the elderly, an excessive number of feature attributes causes the curse of dimensionality and degrades the accuracy of subsequent fall detection. To address this problem, time-domain analysis is first used to extract an initial feature vector set, and a proposed improved kernel principal component analysis algorithm (IKPCA) then reduces the dimensionality of the feature vectors, yielding a higher-quality feature vector set that improves subsequent classification. IKPCA first applies the I-RELIEF algorithm to perform feature selection on the initial feature vector set, then computes information and similarity measures for the fall feature vectors, and finally removes invalid fall feature vectors according to their similarity measure. IKPCA not only retains the good dimensionality-reduction ability of kernel principal component analysis (KPCA) but also extends it with better classification ability. Comparative experiments on real data sets show that IKPCA yields a higher-quality feature vector data set than other algorithms.

20.
A Fast Kernel Feature Extraction Method and Its Application
许亮  张小波 《计算机工程》2009,35(24):26-28
To address the difficulty of computing the kernel matrix K in kernel principal component analysis (KPCA) for large sample sets, a fast KPCA method based on block-wise feature vector selection is proposed. Block-wise feature vector selection extracts a sample subset, and the KPCA model is built from this subset. The method is applied to feature extraction for a chemical process and compared with KPCA over all samples. Experimental results show that the two are comparably effective at feature extraction, but the new method spends much less time on modeling and feature extraction.
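The subset idea in this abstract can be approximated with scikit-learn by fitting KPCA on a sample subset and projecting all samples through it; a plain random sample below stands in for the paper's block-wise feature vector selection, and all sizes are illustrative:

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))

# Select a subset (random here; the paper uses block-wise selection).
idx = rng.choice(len(X), size=200, replace=False)
subset = X[idx]

# Fit KPCA on the 200-sample subset: the kernel matrix to eigendecompose
# is 200 x 200 instead of 2000 x 2000.
kpca = KernelPCA(n_components=5, kernel="rbf", gamma=0.1)
kpca.fit(subset)

# Project every sample through the subset model.
Z = kpca.transform(X)
print(Z.shape)
```

The cost of the eigendecomposition drops from cubic in the full sample count to cubic in the subset size, while every sample still receives a kernel feature representation.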
