Similar Documents
Found 20 similar documents (search time: 78 ms)
1.
The selection of the kernel function and its parameters influences the performance of a kernel learning machine: different kernels and parameter settings induce different geometric structures in the empirical feature space. The traditional approach of changing only the kernel parameters does not change the data distribution in the empirical feature space, and so is not sufficient to improve the performance of kernel learning. This paper applies kernel optimization to enhance the performance of kernel discriminant analysis and proposes Kernel Optimization-based Discriminant Analysis (KODA) for face recognition. KODA proceeds in two steps, kernel optimization and projection: it automatically adjusts the kernel parameters according to the input samples, improving feature extraction for face recognition. Simulations on the Yale and ORL face databases demonstrate the feasibility of enhancing KDA with kernel optimization.

2.
Autoassociators are a special type of neural network which, by learning to reproduce a given set of patterns, grasp the underlying concept useful for pattern classification. In this paper, we present a novel nonlinear model, the kernel autoassociator, based on kernel methods. While conventional nonlinear autoassociation models emphasize searching for nonlinear representations of input patterns, a kernel autoassociator takes a kernel feature space as the nonlinear manifold and places the emphasis on reconstructing input patterns from the kernel feature space. Two methods are proposed to address the reconstruction problem, using linear and multivariate polynomial functions, respectively. We apply the proposed model to novelty detection with and without novelty examples, studying it on the promoter detection and sonar target recognition problems, and to multiclass classification problems including wine recognition, glass recognition, handwritten digit recognition, and face recognition. The experimental results show that, compared with conventional autoassociators and other recognition systems, kernel autoassociators provide better or comparable performance for concept learning and recognition in various domains.
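The reconstruction step admits a compact linear-regression sketch (an assumed form, not the paper's exact model): regress each input pattern onto its empirical kernel map, and flag points with large reconstruction error as novel.

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    # Gaussian kernel matrix between the rows of X and Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_kernel_autoassociator(X, gamma=1.0, ridge=1e-6):
    """Linear reconstruction variant (a sketch): learn a ridge-regression
    map W from the empirical kernel features k(x, X) back to x itself."""
    K = rbf(X, X, gamma)
    W = np.linalg.solve(K.T @ K + ridge * np.eye(len(X)), K.T @ X)
    return W

def reconstruction_error(Xq, X, W, gamma=1.0):
    # novelty score: a large error means the pattern is far from the learned concept
    R = rbf(Xq, X, gamma) @ W
    return np.linalg.norm(R - Xq, axis=1)
```

Training patterns reconstruct almost exactly, while a pattern far from the training set maps to near-zero kernel features and hence reconstructs poorly, giving a novelty score without any novelty examples.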

3.
A Kernel Maximum Scatter-Difference Discriminant Analysis Method for Face Recognition (cited: 1; self-citations: 1; other citations: 0)
This paper proposes an effective nonlinear subspace learning method, kernel maximum scatter-difference discriminant analysis (KMSD), and applies it to face recognition. KMSD first maps the samples from the input space nonlinearly into a feature space, and then, via the kernel trick, solves the maximum scatter-difference discriminant analysis (MSD) problem in that feature space. Experimental results on the Yale and ORL face databases show that the proposed method achieves a high recognition rate for face recognition.
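The scatter-difference criterion can indeed be evaluated in the feature space via the kernel trick alone: tr(S_b) − tr(S_w) depends only on block sums of the Gram matrix. A minimal numpy sketch (function names are illustrative, not from the paper):

```python
import numpy as np

def scatter_difference(K, y):
    """tr(S_b) - tr(S_w) in the kernel-induced feature space, computed
    from the Gram matrix K alone: class and global means of the mapped
    samples enter only through block sums of K."""
    n = len(y)
    tr_sw = np.trace(K) / n
    tr_sb = 0.0
    m_all = K.mean()                      # ||global mean||^2
    for c in np.unique(y):
        idx = (y == c)
        n_c = idx.sum()
        Kcc = K[np.ix_(idx, idx)]
        tr_sw -= Kcc.sum() / (n_c * n)    # remove class-mean energy
        m_c2 = Kcc.mean()                 # ||class mean||^2
        cross = K[idx, :].mean()          # <class mean, global mean>
        tr_sb += (n_c / n) * (m_c2 - 2.0 * cross + m_all)
    return tr_sb - tr_sw
```

With a linear kernel K = X Xᵀ this reduces to the ordinary input-space scatter difference, which makes the formula easy to sanity-check.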

4.
We propose Kernel Self-optimized Locality Preserving Discriminant Analysis (KSLPDA) for feature extraction and recognition. The procedure of KSLPDA is divided into two stages: the first solves for the optimal expansion of the data-dependent kernel with the proposed kernel self-optimization method, and the second seeks the optimal projection matrix for dimensionality reduction. Since the optimal parameters of the data-dependent kernel are obtained automatically by solving a constrained optimization equation based on the maximum margin criterion and the Fisher criterion in the empirical feature space, KSLPDA works well for feature extraction and classification. The comparative experiments show that KSLPDA outperforms PCA, LDA, LPP, supervised LPP and kernel supervised LPP.

5.
A Nonlinear Discriminant Feature Fusion Method for Face Recognition (cited: 2; self-citations: 0; other citations: 2)
Recently, kernel methods for extracting nonlinear features, such as kernel Fisher discriminant analysis (KFDA), have been applied successfully and widely in face and other image recognition. However, existing kernel methods share a problem: constructing the kernel matrix in the feature space is computationally very expensive, and a single class of extracted features often fails to yield satisfactory recognition results. This paper proposes a nonlinear discriminant feature fusion method for face recognition. First, wavelet transform and singular value decomposition are applied to the original input samples as dimensionality-reducing transforms, extracting two classes of features from the same sample space. These two feature classes are then combined via complex vectors into a complex feature vector space, in which optimal discriminant features are finally extracted. Experimental results on the standard ORL face database show that the proposed method not only outperforms existing kernel Fisher discriminant analysis methods in recognition performance, but also extracts features on the ORL database nearly eight times faster.

6.
This paper addresses the problem of automatically tuning multiple kernel parameters for the kernel-based linear discriminant analysis (LDA) method. The kernel approach has been proposed to solve face recognition problems under complex distribution by mapping the input space to a high-dimensional feature space. Several recognition algorithms, such as kernel principal component analysis, kernel Fisher discriminant, generalized discriminant analysis, and kernel direct LDA, have been developed in the last five years. The experimental results show that the kernel-based method is a good and feasible approach to tackle pose and illumination variations. One of the crucial factors in the kernel approach is the selection of kernel parameters, which highly affects the generalization capability and stability of kernel-based learning methods. In view of this, we propose an eigenvalue-stability-bounded margin maximization (ESBMM) algorithm to automatically tune the multiple parameters of the Gaussian radial basis function kernel for the kernel subspace LDA (KSLDA) method, which builds on our previously developed subspace LDA method. The ESBMM algorithm improves the generalization capability of the kernel-based LDA method by maximizing the margin maximization criterion while maintaining the eigenvalue stability of the kernel-based LDA method. An in-depth investigation of generalization performance along the pose and illumination dimensions is performed using the YaleB and CMU PIE databases. The FERET database is also used for benchmark evaluation. Compared with the existing PCA-based and LDA-based methods, our proposed KSLDA method, with the ESBMM kernel parameter estimation algorithm, gives superior performance.

7.
Generalizations of nonnegative matrix factorization (NMF) in kernel feature space, such as projected gradient kernel NMF (PGKNMF) and polynomial kernel NMF (PNMF), have recently been developed for face and facial expression recognition. However, these existing kernel NMF approaches cannot guarantee the nonnegativity of the bases in kernel feature space and are thus essentially semi-NMF methods. In this paper, we show that nonlinear semi-NMF cannot extract the localized components that carry important information in object recognition. Therefore, nonlinear NMF, rather than semi-NMF, needs to be developed for extracting localized components as well as learning the nonlinear structure. To address the nonlinearity problem of NMF and the semi-nonnegativity problem of existing kernel NMF methods, we develop a nonlinear NMF based on a self-constructed Mercer kernel which preserves the nonnegativity constraints on both bases and coefficients in kernel feature space. Experimental results in face and expression recognition show that the proposed approach outperforms existing state-of-the-art kernel methods such as KPCA, GDA, PNMF and PGKNMF.

8.
SVM Face Recognition Based on Conditionally Positive Definite Kernels (cited: 2; self-citations: 0; other citations: 2)
To improve the capability of face recognition classifiers, this paper adopts an improved kernel usable in kernel learning methods: the conditionally positive definite kernel. Conditionally positive definite kernels generally do not satisfy Mercer's condition, yet they still allow distances between samples to be computed in the kernel space, highlighting the feature differences between samples. Simulation experiments on the standard ORL, YALE and ESSEX face databases show that, with comparable training time, the SVM face recognition algorithm based on conditionally positive definite kernels achieves a markedly higher recognition rate than other kernel methods and remains robust as the number of classes increases.
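A standard example of such a kernel (a textbook fact, not specific to this paper) is k(x, y) = −‖x − y‖^β with 0 < β ≤ 2: it violates Mercer's condition, yet satisfies cᵀKc ≥ 0 whenever the coefficients sum to zero, which is exactly what an SVM with a bias term needs:

```python
import numpy as np

def cpd_kernel(X, Y, beta=1.0):
    """Conditionally positive definite kernel k(x, y) = -||x - y||**beta,
    valid for 0 < beta <= 2. It is not positive definite (so not Mercer),
    but sum_ij c_i c_j k(x_i, x_j) >= 0 holds for any coefficient vector c
    with sum(c) == 0, so sample distances in the induced space are still
    well defined."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return -np.sqrt(d2) ** beta
```

The Gram matrix has zero diagonal and negative off-diagonal entries, so it always has a negative eigenvalue; the conditional positivity only holds on the zero-sum subspace.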

9.
This paper proposes a new nonlinear feature extraction method: hidden-space feature extraction based on the scatter-difference criterion. The main idea is to first map the original input space nonlinearly into a hidden space via a kernel function, and then, in that hidden space, extract features using the difference between the between-class scatter and the within-class scatter as the discriminant criterion. Unlike existing kernel feature extraction methods, this method does not require the kernel function to satisfy Mercer's theorem, which widens the range of admissible kernels. More importantly, because the scatter difference is used as the discriminant criterion, the small-sample-size problem encountered by traditional Fisher linear discriminant analysis is fundamentally avoided. Experimental results on the ORL and AR standard face databases verify the effectiveness of the proposed method.

10.
In support vector classification, because the samples are unevenly distributed, a single-width Gaussian kernel overfits in dense regions of the space and underfits in sparse regions; that is, it carries a local risk. To address this, a global secondary kernel is constructed to reduce the local risk produced by the Gaussian kernel; the resulting hybrid kernel is called the main-secondary kernel. A positive-definiteness condition for the main-secondary kernel is constructively given and proved using power series, and a two-stage model selection algorithm based on a genetic algorithm is further proposed to optimize the parameters of the main-secondary kernel. Experiments verify the superiority of the main-secondary kernel and the model selection method.
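The construction can be sketched, under the assumption that the secondary kernel is any global positive definite kernel, as a weighted sum of a narrow local Gaussian and a global polynomial term; a nonnegative combination of positive definite kernels remains positive definite:

```python
import numpy as np

def gaussian_kernel(X, Y, sigma):
    # local "main" kernel: narrow Gaussian of width sigma
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def main_secondary_kernel(X, Y, sigma=0.5, degree=2, rho=0.1):
    """Illustrative main-secondary mixture (parameter names hypothetical):
    a narrow Gaussian 'main' kernel handles local structure, while a global
    polynomial 'secondary' kernel with weight rho reduces the local risk
    in sparse regions."""
    return gaussian_kernel(X, Y, sigma) + rho * (1.0 + X @ Y.T) ** degree
```

Because both summands are Mercer kernels and rho ≥ 0, the mixed Gram matrix stays symmetric positive semidefinite.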

11.
Optimizing the kernel in the empirical feature space (cited: 17; self-citations: 0; other citations: 17)
In this paper, we present a method of kernel optimization by maximizing a measure of class separability in the empirical feature space, a Euclidean space in which the training data are embedded in such a way that the geometrical structure of the data in the feature space is preserved. Employing a data-dependent kernel, we derive an effective kernel optimization algorithm that maximizes the class separability of the data in the empirical feature space. It is shown that there exists a close relationship between the class separability measure introduced here and the alignment measure defined recently by Cristianini. Extensive simulations are carried out which show that the optimized kernel is more adaptive to the input data, and leads to a substantial, sometimes significant, improvement in the performance of various data classification algorithms.
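Data-dependent kernels of this kind are commonly built as a conformal rescaling of a base kernel, k(x, y) = q(x) q(y) k₀(x, y), where q is expanded over a set of anchor points; a sketch with illustrative parameter names:

```python
import numpy as np

def base_kernel(X, Y, gamma=1.0):
    # base Gaussian kernel k0
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def data_dependent_kernel(X, Y, anchors, alpha, gamma=1.0):
    """Conformal (data-dependent) kernel k(x, y) = q(x) q(y) k0(x, y),
    with q(x) = alpha[0] + sum_i alpha[i+1] * k0(x, a_i). Rescaling k0
    by q(x) q(y) reshapes the geometry of the empirical feature space;
    optimizing alpha is what adapts the kernel to the data."""
    def q(Z):
        return alpha[0] + base_kernel(Z, anchors, gamma) @ alpha[1:]
    return q(X)[:, None] * q(Y)[None, :] * base_kernel(X, Y, gamma)
```

Since the result equals D K₀ D with D = diag(q), it inherits positive semidefiniteness from the base kernel for any choice of the expansion coefficients.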

12.
Kernel methods are known to be effective for nonlinear multivariate analysis. One of the main issues in the practical use of kernel methods is the selection of the kernel, and there have been many studies on kernel selection and kernel learning. Multiple kernel learning (MKL) is one of the promising kernel optimization approaches. Kernel methods are applied to various classifiers, including Fisher discriminant analysis (FDA). FDA gives the Bayes optimal classification axis if the data distribution of each class in the feature space is a Gaussian with a shared covariance structure. Based on this fact, an MKL framework based on the notion of Gaussianity is proposed. As a concrete implementation, an empirical characteristic function is adopted to measure Gaussianity in the feature space associated with a convex combination of kernel functions, and two MKL algorithms are derived. Experimental results on several data sets show that the proposed kernel learning followed by FDA offers strong classification power.
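The convex-combination step common to MKL can be sketched as follows (the Gaussianity-based weight selection is the paper's contribution and is not reproduced here; the weights are supplied by the caller):

```python
import numpy as np

def combine_kernels(gram_list, weights):
    """Convex combination K = sum_m w_m K_m with w_m >= 0 and sum(w) == 1.
    A convex combination of Mercer kernels is again a Mercer kernel, so
    the combined Gram matrix can be fed directly to kernel FDA or an SVM."""
    w = np.asarray(weights, dtype=float)
    if np.any(w < 0) or not np.isclose(w.sum(), 1.0):
        raise ValueError("weights must form a convex combination")
    return sum(wi * Ki for wi, Ki in zip(w, gram_list))
```

Keeping the weights on the simplex is what guarantees the combined matrix stays positive semidefinite, so any downstream kernel classifier remains valid.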

13.
This paper presents a novel dimension reduction algorithm for kernel based classification. In the feature space, the proposed algorithm maximizes the ratio of the squared between-class distance and the sum of the within-class variances of the training samples for a given reduced dimension. This algorithm has lower complexity than the recently reported kernel dimension reduction (KDR) for supervised learning. We conducted several simulations with large training datasets, which demonstrate that the proposed algorithm has similar performance or is marginally better compared with KDR whilst having the advantage of computational efficiency. Further, we applied the proposed dimension reduction algorithm to face recognition in which the number of training samples is very small. This proposed face recognition approach based on the new algorithm outperforms the eigenface approach based on the principal component analysis (PCA), when the training data is complete, that is, representative of the whole dataset.

14.
Although Fisher-criterion-based linear discriminant analysis is recognized as one of the most effective feature extraction methods and has been applied successfully to face recognition, real face image distributions are highly complex because of variations in illumination, expression and pose, so extracting nonlinear discriminant features is necessary. To use nonlinear discriminant features for face recognition, this paper proposes a kernel-based subspace discriminant analysis method. The method first maps the original samples implicitly into a high-dimensional (possibly infinite-dimensional) feature space via the kernel trick; then, in that feature space, it uses reproducing kernel theory to establish two equivalent models based on the generalized Fisher criterion; finally, it obtains the optimal discriminant vectors for face recognition via the orthogonal complement space method. Recognition experiments on the ORL and NUST603 face databases yield recognition rates of 94% and 99.58%, respectively, which are comparable to kernel combination methods and clearly better than KPCA and kernel Fisherfaces.

15.
An Improved Cross-Covering Algorithm: the Kernel Moving Covering Algorithm (cited: 2; self-citations: 2; other citations: 2)
This paper analyzes the cross-covering algorithm for feedforward neural networks and, by introducing kernel functions from statistical learning theory, proposes a method combining the two: the kernel moving covering algorithm (KMCA). KMCA maps the input samples into a high-dimensional feature space through a Mercer kernel, then covers first and shifts afterwards so that the covering regions become locally optimal, achieving classification in the kernel space. Experimental results demonstrate the feasibility and effectiveness of KMCA.

16.
There are two fundamental problems with the Fisher linear discriminant analysis for face recognition. One is the singularity problem of the within-class scatter matrix due to small training sample size. The other is that it cannot efficiently describe complex nonlinear variations of face images because of its linear property. In this letter, a kernel scatter-difference-based discriminant analysis is proposed to overcome these two problems. We first use the nonlinear kernel trick to map the input data into an implicit feature space F. Then a scatter-difference-based discriminant rule is defined to analyze the data in F. The proposed method can not only produce nonlinear discriminant features but also avoid the singularity problem of the within-class scatter matrix. Extensive experiments show encouraging recognition performance of the new algorithm.

17.

In this paper, a novel multikernel deterministic extreme learning machine (ELM) and its variants are developed for the classification of nonlinear problems. Over the past decade ELM has proved to be an efficacious learning algorithm, but because of its non-deterministic, single-kernel-dependent feature mapping it cannot be applied efficiently to real-time classification problems that require an invariant output solution. We address this problem by calculating the input and hidden layer parameters analytically to achieve a deterministic solution, and by exploiting the data fusion capability of multiple kernel learning. This yields a novel deterministic ELM with a single-layer architecture in which the kernel function is a linear combination of disparate base kernels; the kernel weights depend on the problem at hand and are calculated empirically. To further enhance performance, we use fuzzy sets to find the pixel-wise affinity of face images to the different classes, which handles the uncertainty involved in face recognition under varying environmental conditions; the pixel-wise membership values extract otherwise unseen information from the images to a significant extent. The validity of the proposed approach is tested extensively on a diverse set of face databases, with and without illumination variations and with different types of kernels. The proposed algorithms achieve a 100% recognition rate on the Yale database when seven or eight images per identity are used for training, and achieve superior recognition rates on the AT&T, Georgia Tech and AR databases compared with contemporary methods, which demonstrates the efficacy of the proposed approaches under uncontrolled conditions.


18.
For face recognition applications, this paper proposes a learned, discriminative kernel image differential filter. First, unlike the hand-crafted design of existing filters, the filter is learned dynamically from the training set; by incorporating the idea of linear discriminant analysis (LDA) into the learning process, it increases the within-class similarity of the filtered images while reducing the between-class similarity. Second, second-order differential information is introduced on top of the linear filtering classifier, and the filter is learned in a high-dimensional space using kernel methods, so that the detail and nonlinear information in the images can be better exploited and a more discriminative feature description is obtained. Comparative experiments on the AR and ORL face databases show that the proposed algorithm effectively improves recognition performance relative to the learnable linear image filter IFL, a kernel image filter without differential information, and a kernel image filter with only first-order differential information.

19.
UDP has been successfully applied in many fields; it finds a subspace that maximizes the ratio of the nonlocal scatter to the local scatter. However, UDP cannot represent nonlinear spaces well because it is linear in nature, whereas kernel methods can discover the nonlinear structure of images. To improve the performance of UDP, kernel UDP, a nonlinear version of UDP, is proposed in this paper for face feature extraction and face recognition via kernel tricks. We formulate the kernel UDP theory and develop a two-stage method to extract kernel UDP features, namely weighted kernel PCA plus UDP. The experimental results on the FERET and ORL databases show that the proposed kernel UDP is effective.

20.
Kernel methods can uncover the nonlinear distribution patterns among high-precision, high-speed print images, and their mining power is determined by the chosen kernel function and its parameters. Learning and selecting these is an open problem that must be solved for kernel theory to develop further and be applied in practice. For the specific setting of intelligent print inspection, this paper proposes a new optimization-based method for learning the kernel function and its parameters from a function space with dynamic parameters, so as to bring the kernel method to optimal performance. Unlike traditional computational approaches, the kernel parameters in this kernel function space vary continuously, which extends the search range by one dimension. Experimental results show that the iterative algorithm, combined with theoretical analysis, needs only 10 iterations to obtain the statistically optimal kernel function and parameters, and the restoration error computed with the learned kernel is statistically minimal.
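The paper's own iterative algorithm is not reproduced here; as a generic, hedged stand-in, a continuous kernel parameter can be scored by a data-driven criterion such as kernel-target alignment and searched over a candidate range:

```python
import numpy as np

def rbf(X, Y, gamma):
    # Gaussian kernel matrix with continuous width parameter gamma
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def alignment(K, y):
    """Kernel-target alignment <K, yy^T>_F / (||K||_F * ||yy^T||_F):
    how well the Gram matrix matches the ideal label kernel."""
    Y = np.outer(y, y)
    return (K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y))

def select_gamma(X, y, candidates):
    # score every candidate width on the training set, keep the best aligned
    return max(candidates, key=lambda g: alignment(rbf(X, X, g), y))
```

Extreme widths collapse the Gram matrix toward all-ones or the identity, so for well-separated classes a moderate width wins the alignment score.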


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号