Similar documents
20 similar documents found.
1.
It is widely recognized that how well the selected kernel matches the data determines the performance of kernel-based methods. Ideally, the data would be linearly separable in the kernel-induced feature space, so the Fisher linear discriminant criterion could be used as a cost function to optimize the kernel. In many applications, however, the data may not be linearly separable even after the kernel transformation, e.g., it may have a multimodally distributed structure; in that case a nonlinear classifier is preferred, and the Fisher criterion is clearly not a suitable kernel optimization rule. Motivated by this issue, we propose a localized kernel Fisher criterion, in place of the traditional Fisher criterion, as the kernel optimization rule; it increases the local margins between the embedded classes in the kernel-induced feature space. Experimental results on benchmark data and measured radar high-resolution range profile (HRRP) data show that the proposed method improves classification performance.
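As a reference point for the idea, the sketch below computes the standard (non-localized) two-class kernel Fisher criterion from a kernel matrix and uses it to score candidate RBF widths; the paper's localized criterion would replace the global scatters with neighborhood-level margins, which is not reproduced here. The function name and the gamma grid are illustrative assumptions.

```python
# Minimal sketch: the global two-class kernel Fisher criterion used as a
# kernel-quality score (the localized variant from the paper is not shown).
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def kernel_fisher_score(X, y, gamma):
    """Between-class over within-class scatter in the RBF feature space,
    computed entirely from kernel evaluations (two classes assumed)."""
    K = rbf_kernel(X, gamma=gamma)
    idx = [np.flatnonzero(y == c) for c in np.unique(y)]
    block = lambda a, b: K[np.ix_(a, b)].mean()
    # ||m1 - m2||^2 via the kernel trick (m_c = class mean in feature space)
    between = block(idx[0], idx[0]) + block(idx[1], idx[1]) - 2 * block(idx[0], idx[1])
    # Average within-class scatter: mean_i K(x_i, x_i) - mean of the class block
    within = sum(K[i, i].mean() - block(i, i) for i in idx)
    return between / within

# Illustrative usage: pick the RBF width that maximizes class separability.
# best_gamma = max(np.logspace(-3, 2, 20), key=lambda g: kernel_fisher_score(X, y, g))
```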

2.
Kernel methods are known to be effective for nonlinear multivariate analysis. One of the main issues in the practical use of kernel methods is the selection of the kernel, and there have been many studies on kernel selection and kernel learning. Multiple kernel learning (MKL) is one of the promising kernel optimization approaches. Kernel methods are applied to various classifiers, including Fisher discriminant analysis (FDA). FDA gives the Bayes optimal classification axis if the data distribution of each class in the feature space is a gaussian with a shared covariance structure. Based on this fact, an MKL framework built on the notion of gaussianity is proposed. As a concrete implementation, an empirical characteristic function is adopted to measure gaussianity in the feature space associated with a convex combination of kernel functions, and two MKL algorithms are derived. Experimental results on several data sets show that the proposed kernel learning followed by FDA offers strong classification power.
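A minimal sketch of the MKL building block used here: a convex combination of base kernel matrices that would then be handed to a kernel FDA solver. The paper optimizes the combination weights with an empirical characteristic function measure of gaussianity; the weights and base-kernel parameters below are fixed, illustrative assumptions.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

def combined_kernel(X, Z, weights=(0.6, 0.3, 0.1)):
    """Convex combination of base kernels (weights nonnegative, sum to 1)."""
    bases = [rbf_kernel(X, Z, gamma=1.0),
             rbf_kernel(X, Z, gamma=0.01),
             polynomial_kernel(X, Z, degree=2)]
    w = np.asarray(weights, dtype=float)
    assert np.all(w >= 0) and np.isclose(w.sum(), 1.0)  # convexity constraint
    return sum(wi * Ki for wi, Ki in zip(w, bases))
```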

3.
Kernel Fisher discriminant analysis (KFDA) extracts a nonlinear feature from a sample by evaluating as many kernel functions as there are training samples. Thus, its computational efficiency is inversely proportional to the size of the training set. In this paper we propose a more efficient approach to nonlinear feature extraction, FKFDA (fast KFDA). FKFDA consists of two parts. First, we select a portion of the training samples based on two criteria produced by approximating kernel principal component analysis (AKPCA) in the kernel feature space. Then, treating the selected training samples as nodes, we formulate FKFDA to improve the efficiency of nonlinear feature extraction. In FKFDA, the discriminant vectors are expressed as linear combinations of nodes in the kernel feature space, and extracting a feature from a sample requires evaluating only as many kernel functions as there are nodes. Therefore, the proposed FKFDA has a much faster feature extraction procedure than naive kernel-based methods. Experimental results on face recognition and benchmark classification datasets suggest that the proposed FKFDA generates well-classified features.
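The efficiency claim is easy to see in code: once the discriminant vectors are expressed over the selected nodes, extracting a feature costs one kernel evaluation per node rather than per training sample. The AKPCA node-selection step is not shown; the names and the RBF choice are assumptions.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def fkfda_features(X_test, nodes, alpha, gamma=0.1):
    """Project samples onto discriminant vectors expressed over the nodes.

    nodes: (m, d) selected training samples; alpha: (m, r) expansion
    coefficients for r discriminant vectors. Cost per sample: m kernel
    evaluations instead of one per training sample."""
    return rbf_kernel(X_test, nodes, gamma=gamma) @ alpha
```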

4.
In this paper, we propose a general regularization framework for multiclass classification based on discriminant functions. Since the objective function of the primal optimization problem in this framework is not differentiable, the optimal solution cannot be obtained directly. With the aid of the deterministic annealing approach, a differentiable objective function is derived subject to a constraint on the randomness of the solution. The problem can then be approximated by solving a sequence of differentiable optimization problems, and this approximation converges to the original problem asymptotically. With this approach, class-conditional posterior probabilities can be calculated directly without assuming an underlying probabilistic model. We also note a connection between our approach and existing statistical models such as Fisher discriminant analysis and logistic regression.

5.

In this paper, we propose a new unsupervised feature selection method based on kernel Fisher discriminant analysis and regression learning. Existing feature selection methods are based on either manifold learning or discriminative techniques, each of which has shortcomings. Although some studies show the advantage of two-step methods that benefit from both manifold learning and discriminative techniques, a joint formulation has been shown to be more efficient. To this end, we construct a global discriminant objective term of a clustering framework based on the kernel method. We add a regression learning term to the objective function, which drives the optimization to select a low-dimensional representation of the original dataset. We use the L2,1-norm of the features to impose a sparse structure on them, which yields more discriminative features. We propose an algorithm to solve the resulting optimization problem and further discuss its convergence, parameter sensitivity, computational complexity, and clustering and classification accuracy. To demonstrate the effectiveness of the proposed algorithm, we perform a set of experiments on different available datasets and compare the results against state-of-the-art algorithms. These results show that our method outperforms the existing state-of-the-art methods in many cases on different datasets, although the improved performance comes at the cost of increased time complexity.


6.
The advantage of a kernel method often depends critically on a proper choice of the kernel function. A promising approach is to learn the kernel from data automatically. In this paper, we propose a novel method for learning the kernel matrix based on maximizing a class separability criterion similar to those used by linear discriminant analysis (LDA) and kernel Fisher discriminant (KFD). Notably, optimizing this criterion does not require inverting the possibly singular within-class scatter matrix, which is a computational problem encountered by many LDA and KFD methods. Experiments on both synthetic data and real-world data from UCI and FERET show that our method consistently outperforms some previous kernel learning methods.

7.
A reformative kernel Fisher discriminant algorithm that can deal with two-class problems as well as problems with more than two classes is proposed. The algorithm rests on the supposition that, in the feature space, the discriminant vector can be approximated by a linear combination of a subset of the training samples, called "significant nodes". Once the "significant nodes" are found, the novel algorithm is superior to naive kernel Fisher discriminant analysis in classification efficiency. A recursive algorithm for selecting the "significant nodes" is developed in detail. Experiments show that the novel algorithm is effective and much more efficient in classification.

8.
This paper examines the theory of kernel Fisher discriminant analysis (KFD) in a Hilbert space and develops a two-phase KFD framework: kernel principal component analysis (KPCA) plus Fisher linear discriminant analysis (LDA). This framework provides novel insights into the nature of KFD. Based on it, the authors propose a complete kernel Fisher discriminant analysis (CKFD) algorithm. CKFD can carry out discriminant analysis in "double discriminant subspaces"; the fact that it can make full use of two kinds of discriminant information, regular and irregular, makes CKFD a more powerful discriminator. The proposed algorithm was tested and evaluated on the FERET face database and the CENPARMI handwritten numeral database. The experimental results show that CKFD outperforms other KFD algorithms.
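The two-phase framework maps directly onto off-the-shelf components, as in the sketch below (scikit-learn, hypothetical hyperparameters); CKFD's fusion of the regular and irregular discriminant information from the two subspaces is beyond this sketch.

```python
from sklearn.decomposition import KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

# Phase 1: KPCA maps the data into the kernel principal subspace;
# phase 2: ordinary Fisher LDA runs in that subspace.
two_phase_kfd = make_pipeline(
    KernelPCA(n_components=50, kernel="rbf", gamma=0.01),
    LinearDiscriminantAnalysis(),
)
# Usage (assumed data): two_phase_kfd.fit(X_train, y_train)
#                       y_pred = two_phase_kfd.predict(X_test)
```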

9.
Kernel functions are used to estimate the probability density functions of variables for nonparametric discriminant analysis. In connection with stepwise variable identification, a stepwise maximum likelihood procedure for estimating the smoothing factors of the kernel functions is developed. This procedure allows a step-by-step estimation of smoothing factors for every variable that is considered for addition to the model or examined as a substitute for a variable already in the model. Different criteria for model evaluation in stepwise discriminant analysis are discussed. Besides criteria such as distance and dependence functions and the error and non-error rates, a criterion that considers the ratio of the probability densities of the different classes at a point x is proposed for stepwise variable identification. An application of the described procedures to a medical decision problem shows the importance of stepwise parameter estimation of kernel functions for nonparametric discriminant analysis and the role of different model evaluation criteria in selecting the best subset of variables.
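A minimal sketch of the underlying classifier: one kernel density estimate per class, with classification by the larger class-conditional density. The stepwise maximum likelihood estimation of the smoothing factors and the variable selection from the paper are not reproduced, and equal class priors are assumed.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

def kde_classify(X_train, y_train, X_test, bandwidths):
    """Nonparametric discriminant analysis: one gaussian KDE per class,
    decide by the ratio of class-conditional densities (equal priors).

    bandwidths: mapping from class label to its smoothing factor."""
    classes = np.unique(y_train)
    log_dens = np.column_stack([
        KernelDensity(bandwidth=bandwidths[c])
        .fit(X_train[y_train == c])
        .score_samples(X_test)              # log p(x | class c)
        for c in classes
    ])
    return classes[np.argmax(log_dens, axis=1)]
```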

10.
The Nadaraya–Watson estimator, also known as kernel regression, is a density-based regression technique. It weights output values with the relative densities in input space, where the density is measured with kernel functions that depend on bandwidth parameters. In this work we present an evolutionary bandwidth optimizer for kernel regression. The approach is based on a robust loss function, leave-one-out cross-validation, and the CMSA-ES as optimization engine. A variant with local parameterized Nadaraya–Watson models enhances the approach and allows the model to adapt to local characteristics of the data space. The unsupervised counterpart of kernel regression is an approach to learning principal manifolds. The learning problem of unsupervised kernel regression (UKR) consists of optimizing the latent variables, which is a multimodal problem with many local optima. We propose an evolutionary framework for the optimization of UKR based on scaling initial locally linear embedding solutions and minimizing the cross-validation error. Both methods are analyzed experimentally.
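A minimal sketch of the estimator and of the leave-one-out fitness the evolutionary optimizer minimizes; the CMSA-ES itself is replaced by a simple grid search for illustration, and the 1-D setting and names are assumptions.

```python
import numpy as np

def nadaraya_watson(x, X, y, h):
    """Kernel regression at x: density-weighted average of the outputs."""
    w = np.exp(-0.5 * ((x - X) / h) ** 2)   # gaussian kernel, bandwidth h
    return (w * y).sum() / w.sum()

def loo_error(X, y, h):
    """Leave-one-out CV error of bandwidth h (the fitness function)."""
    n = len(X)
    idx = np.arange(n)
    preds = [nadaraya_watson(X[i], X[idx != i], y[idx != i], h) for i in range(n)]
    return float(np.mean((y - np.asarray(preds)) ** 2))

# Illustrative stand-in for the evolutionary search over the bandwidth:
# best_h = min(np.logspace(-2, 1, 30), key=lambda h: loo_error(X, y, h))
```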

11.
A new discriminative kernel from probabilistic models
Recently, Jaakkola and Haussler (1999) proposed a method for constructing kernel functions from probabilistic models. Their so-called Fisher kernel has been combined with discriminative classifiers such as support vector machines and applied successfully to, for example, DNA and protein analysis. Whereas the Fisher kernel is calculated from the marginal log-likelihood, we propose the TOP kernel, derived from tangent vectors of posterior log-odds. Furthermore, we develop a theoretical framework for feature extractors from probabilistic models and use it to analyze the TOP kernel. In experiments, our new discriminative TOP kernel compares favorably to the Fisher kernel.
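For intuition, here is a toy Fisher kernel for a one-parameter model, a 1-D gaussian with free mean and fixed variance, where the score of the marginal log-likelihood and the Fisher information have closed forms. The TOP kernel would instead differentiate the posterior log-odds. All names and values are illustrative.

```python
import numpy as np

def fisher_kernel(x1, x2, mu, sigma2):
    """K(x, x') = U_x F^{-1} U_x' for the model p(x|mu) = N(mu, sigma2).

    Score:              U_x = d/dmu log p(x|mu) = (x - mu) / sigma2
    Fisher information: F   = E[U_x^2]          = 1 / sigma2
    """
    u1 = (x1 - mu) / sigma2
    u2 = (x2 - mu) / sigma2
    return u1 * u2 * sigma2      # multiplying by sigma2 applies F^{-1}

# e.g. fisher_kernel(1.2, -0.3, mu=0.0, sigma2=1.0) -> -0.36
```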

12.
王昕  刘颖  范九伦 《计算机科学》2012,39(9):262-265
Kernel Fisher discriminant analysis is an effective nonlinear discriminant method. Traditional kernel Fisher discriminant analysis uses only a single kernel function, which remains insufficient for face feature extraction. In view of this, a multi-kernel Fisher discriminant analysis method is proposed: the projections obtained by several single-kernel Fisher discriminants are combined with weights to form a weighted projection, which is then used for feature extraction and classification. Experiments show that for face feature extraction and classification, multi-kernel Fisher discriminant analysis outperforms single-kernel Fisher discriminant analysis.
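A minimal sketch of the combination step described above: projections from several single-kernel Fisher discriminants are merged into one weighted projection. The weights and shapes are assumptions, and the training of each single-kernel discriminant is not shown.

```python
import numpy as np

def weighted_projection(single_kernel_projs, weights):
    """Combine per-kernel Fisher projections into one weighted projection.

    single_kernel_projs: list of arrays, each of shape (n_samples,),
    produced by one single-kernel Fisher discriminant."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                        # normalize the combination weights
    return sum(wi * p for wi, p in zip(w, single_kernel_projs))
```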

13.
张成  李娜  李元  逄玉俊 《计算机应用》2014,34(10):2895-2898
To address the empirical selection of the Gaussian kernel parameter β in kernel principal component analysis (KPCA), a discriminant method for selecting the kernel parameter of KPCA is proposed. Within-class and between-class kernel window widths are computed from the class labels of the training samples, and the kernel parameter is determined from these widths by a discriminant selection rule. The kernel matrix determined by the discriminantly selected parameter accurately describes the structural features of the training space. Principal component analysis (PCA) is then used to decompose the feature space and extract principal components for dimensionality reduction and feature extraction. The discriminant kernel-width method chooses a smaller width in dense classification regions and a larger width in sparse ones. Discriminant KPCA (Dis-KPCA) was applied to a simulated data example and the Tennessee Eastman process (TEP); compared with KPCA and PCA, the experimental results show that Dis-KPCA effectively reduces the dimensionality of the sample data and separates the three classes 100%, so the proposed method achieves higher dimensionality-reduction accuracy.
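A sketch of one plausible reading of the width computation: the within-class and between-class kernel widths are taken as average pairwise distances under the class labels, from which a discriminant rule would pick β. The abstract does not spell out the exact formula, so treat this as an assumption.

```python
import numpy as np
from scipy.spatial.distance import cdist

def class_kernel_widths(X, y):
    """Average within-class and between-class pairwise distances, used as
    candidate kernel window widths (hypothetical concrete formula).
    Assumes every class has at least two samples."""
    classes = np.unique(y)
    within, between = [], []
    for i, ci in enumerate(classes):
        Xi = X[y == ci]
        D = cdist(Xi, Xi)
        within.append(D[np.triu_indices_from(D, k=1)].mean())
        for cj in classes[i + 1:]:
            between.append(cdist(Xi, X[y == cj]).mean())
    return float(np.mean(within)), float(np.mean(between))
```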

14.
Feature extraction is among the most important problems in face recognition systems. In this paper, we propose an enhanced kernel discriminant analysis (KDA) algorithm called kernel fractional-step discriminant analysis (KFDA) for nonlinear feature extraction and dimensionality reduction. Not only can this new algorithm, like other kernel methods, deal with the nonlinearity required for many face recognition tasks, but it can also outperform traditional KDA algorithms in resisting the adverse effects of outlier classes. Moreover, to further strengthen the overall performance of KDA algorithms for face recognition, we propose two new kernel functions: the cosine fractional-power polynomial kernel and the non-normal Gaussian RBF kernel. We perform extensive comparative studies on the YaleB and FERET face databases. Experimental results show that our KFDA algorithm outperforms traditional kernel principal component analysis (KPCA) and KDA algorithms, and that further improvement is obtained when the two new kernel functions are used.
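One common reading of a "cosine fractional-power polynomial kernel" is a fractional-power polynomial kernel followed by cosine normalization, sketched below; the exact definition in the paper may differ, so treat this as an assumption.

```python
import numpy as np

def frac_poly_kernel(X, Z, a=0.8, c=1.0):
    """Fractional-power polynomial kernel (x.z + c)^a with 0 < a < 1.
    Assumes x.z + c > 0 (e.g., nonnegative features)."""
    return (X @ Z.T + c) ** a

def cosine_normalized(X, Z, a=0.8, c=1.0):
    """k(x, z) / sqrt(k(x, x) k(z, z)) -- the 'cosine' version."""
    Kxz = frac_poly_kernel(X, Z, a, c)
    dx = np.einsum('ij,ij->i', X, X) + c   # x.x + c, before the power
    dz = np.einsum('ij,ij->i', Z, Z) + c
    return Kxz / np.sqrt(np.outer(dx ** a, dz ** a))
```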

15.
The Bayesian evidence framework has been successfully applied to the design of multilayer perceptrons (MLPs) in the work of MacKay. Nevertheless, the training of MLPs suffers from drawbacks like the nonconvex optimization problem and the choice of the number of hidden units. In support vector machines (SVMs) for classification, as introduced by Vapnik, a nonlinear decision boundary is obtained by mapping the input vector first in a nonlinear way to a high-dimensional kernel-induced feature space in which a linear large margin classifier is constructed. Practical expressions are formulated in the dual space in terms of the related kernel function, and the solution follows from a (convex) quadratic programming (QP) problem. In least-squares SVMs (LS-SVMs), the SVM problem formulation is modified by introducing a least-squares cost function and equality instead of inequality constraints, and the solution follows from a linear system in the dual space. Implicitly, the least-squares formulation corresponds to a regression formulation and is also related to kernel Fisher discriminant analysis. The least-squares regression formulation has advantages for deriving analytic expressions in a Bayesian evidence framework, in contrast to the classification formulations used, for example, in gaussian processes (GPs). The LS-SVM formulation has clear primal-dual interpretations, and without the bias term, one explicitly constructs a model that yields the same expressions as have been obtained with GPs for regression. In this article, the Bayesian evidence framework is combined with the LS-SVM classifier formulation. Starting from the feature space formulation, analytic expressions are obtained in the dual space on the different levels of Bayesian inference, while posterior class probabilities are obtained by marginalizing over the model parameters. Empirical results obtained on 10 public domain data sets show that the LS-SVM classifier designed within the Bayesian evidence framework consistently yields good generalization performances.
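The "solution follows from a linear system" claim is concrete enough to sketch: below is the LS-SVM in its regression-style form with ±1 targets, solved with a single dense linear solve. The hyperparameters and names are assumptions, and the Bayesian evidence levels are not shown.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def lssvm_fit(X, y, reg=1.0, gamma=0.1):
    """LS-SVM in regression form with targets y in {-1, +1}:
    solve [[0, 1^T], [1, K + I/reg]] [b; alpha] = [0; y]."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, gamma=gamma) + np.eye(n) / reg
    sol = np.linalg.solve(A, np.concatenate(([0.0], y.astype(float))))
    return sol[1:], sol[0]                 # alpha (dual weights), bias b

def lssvm_predict(X_train, alpha, b, X_test, gamma=0.1):
    return np.sign(rbf_kernel(X_test, X_train, gamma=gamma) @ alpha + b)
```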

16.
There are two fundamental problems with Fisher linear discriminant analysis for face recognition. One is the singularity of the within-class scatter matrix due to the small training sample size. The other is that, being linear, it cannot efficiently describe complex nonlinear variations of face images. In this letter, a kernel scatter-difference-based discriminant analysis is proposed to overcome these two problems. We first use the nonlinear kernel trick to map the input data into an implicit feature space F. A scatter-difference-based discriminant rule is then defined to analyze the data in F. The proposed method not only produces nonlinear discriminant features but also avoids the singularity problem of the within-class scatter matrix. Extensive experiments show encouraging recognition performance of the new algorithm.

17.
A reformative kernel Fisher discriminant method is proposed, derived directly from naive kernel Fisher discriminant analysis but superior to it in classification efficiency. In the novel method, only a subset of the training patterns, called "significant nodes", needs to be used when classifying a test pattern. A recursive algorithm for selecting the "significant nodes", which is the key to the novel method, is presented in detail. Experiments on benchmarks show that the novel method is effective and much more efficient in classification.

18.
A semi-supervised kernel Fisher discriminant analysis (KFDA) algorithm based on the low-density-separation geometry distance (semi-supervised KFDA based on low density separation geometry distance, Semi GKFDA for short) is proposed. The algorithm takes the low-density-separation geometry distance as the similarity measure and exploits a large number of unlabeled samples to improve the generalization ability of KFDA. First, a kernel function maps the samples from the original space into a high-dimensional feature space. Then, labeled and unlabeled samples are used to build an intrinsic-structure consistency assumption on the low-density-separation geometry distance measure, which is integrated into the objective function of Fisher discriminant analysis as a regularization term. Finally, the optimal projection matrix is obtained by minimizing the objective function. Experiments on artificial data sets and UCI data sets show that, compared with KFDA and its improved variants, the algorithm significantly improves classification performance. In addition, comparisons with other algorithms on face recognition problems show that the algorithm achieves higher recognition accuracy.

19.
陈佳佩  卢元元 《计算机工程》2011,37(21):179-181
The traditional Fisher discriminant method suffers from the small-sample-size problem, while the inverse Fisher discriminant method has a relatively low recognition rate. To address this, a kernel-based inverse Fisher face recognition method is proposed: a kernel mapping is introduced on top of the inverse Fisher criterion, and a suitable kernel function is chosen to extract face image features in the high-dimensional space. Experimental results show that the method retains the robustness of the inverse Fisher discriminant while achieving a higher face recognition rate.

20.
Application of kernel Fisher discriminant analysis to multi-focus image fusion
A multi-focus image fusion method based on kernel Fisher discriminant analysis and image block segmentation is proposed. The method first partitions the source images into blocks and computes clarity features that reflect how well each block is focused; parts of the source images are then taken as training samples to obtain the trained kernel Fisher discriminant parameters; next, the trained kernel Fisher discriminant is used to produce a preliminary fused image; finally, the source-image blocks located at the boundary between clear and blurred regions are processed with the redundant wavelet transform to obtain the final fused image. Experimental results show that the fusion quality of this method is better than that of commonly used fusion methods, and that it achieves a good trade-off between improving fusion quality and reducing computation.
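A minimal sketch of the block-clarity machinery the method builds on: a per-block focus measure and a fusion rule that keeps the sharper block. The KFDA classifier trained on clarity features and the redundant-wavelet post-processing of boundary blocks are not reproduced, and the Laplacian-energy feature is an assumption.

```python
import numpy as np
from scipy.ndimage import laplace

def block_clarity(block):
    """Focus measure: energy of the Laplacian (one common clarity feature)."""
    return float((laplace(block.astype(float)) ** 2).mean())

def fuse_by_clarity(img_a, img_b, size=16):
    """Per block, keep the sharper of two registered source images."""
    fused = np.empty_like(img_a)
    for i in range(0, img_a.shape[0], size):
        for j in range(0, img_a.shape[1], size):
            pa = img_a[i:i + size, j:j + size]
            pb = img_b[i:i + size, j:j + size]
            fused[i:i + size, j:j + size] = pa if block_clarity(pa) >= block_clarity(pb) else pb
    return fused
```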
