Similar Articles
1.
Kernel methods and deep learning are two of the most remarkable machine learning techniques of recent years, and both have achieved great success in many applications. Kernel methods are powerful tools for capturing nonlinear patterns behind data. They implicitly learn high- (even infinite-) dimensional nonlinear features in a reproducing kernel Hilbert space (RKHS) while keeping the computation tractable via the kernel trick. It is commonly agreed that the success of kernel methods depends heavily on the choice of kernel. Multiple kernel learning (MKL) is one scheme that performs kernel combination and selection for a variety of learning tasks, such as classification, clustering, and dimensionality reduction. Deep learning models project input data through several layers of nonlinearity and learn different levels of abstraction. The composition of multiple layers of nonlinear functions can approximate a rich set of naturally occurring input-output dependencies. To bridge the two, deep kernel learning has proven effective at learning complex feature representations by combining the nonparametric flexibility of kernel methods with the structural properties of deep learning. This article presents a comprehensive overview of state-of-the-art approaches that bridge MKL and deep learning. Specifically, we systematically review the typical hybrid models, training techniques, and their theoretical and practical benefits, followed by remaining challenges and future directions. We hope that our perspectives and discussions serve as valuable references for practitioners and theoreticians seeking to combine the advantages of both paradigms and to explore new synergies.
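To make the kernel-combination idea concrete, here is a minimal, hedged sketch of the simplest MKL setup: a fixed convex combination of base Gram matrices fed to a precomputed-kernel SVM. The kernels, weights, and toy data are illustrative assumptions, not any surveyed method's actual configuration.

```python
# Minimal MKL sketch: a convex combination of base Gram matrices.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel, linear_kernel

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))               # toy inputs (assumed data)
y = (X[:, 0] * X[:, 1] > 0).astype(int)     # a nonlinear toy target

# Base kernels: each captures a different notion of similarity.
grams = [linear_kernel(X), polynomial_kernel(X, degree=3), rbf_kernel(X, gamma=0.5)]
weights = np.array([0.2, 0.3, 0.5])         # convex combination (sums to 1)

K = sum(w * G for w, G in zip(weights, grams))  # combined Gram matrix

clf = SVC(kernel="precomputed").fit(K, y)
print("training accuracy:", clf.score(K, y))
```

In full MKL the weights themselves would be learned rather than fixed; the fixed combination above only shows the object being optimized.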

2.
Multiple kernel learning (MKL) methods have achieved better performance than single-kernel learning in both classification and regression tasks, but traditional MKL methods are designed for two-class or multi-class classification. To make MKL applicable to one-class classification (OCC), a one-class support vector machine (OCSVM) based on centered kernel alignment (CKA) is proposed. CKA is first used to compute a weight for each kernel matrix; the resulting weights are then used as linear combination coefficients to combine the different types of kernel functions…
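As a rough illustration of the CKA weighting step this abstract describes, the sketch below scores each candidate Gram matrix by its centered alignment with an ideal label-derived target kernel and normalizes the scores into combination weights. The target construction for the one-class setting is not reproduced here; the standard two-class target built from labels y in {-1, +1} is used purely for illustration.

```python
# CKA-based kernel weighting sketch (two-class target as an assumption).
import numpy as np

def center(K):
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    return H @ K @ H

def cka(K1, K2):
    # Centered kernel alignment: Frobenius cosine of the centered Grams.
    Kc1, Kc2 = center(K1), center(K2)
    return np.sum(Kc1 * Kc2) / (np.linalg.norm(Kc1) * np.linalg.norm(Kc2))

def cka_weights(grams, y):
    T = np.outer(y, y).astype(float)      # ideal target kernel y y^T
    scores = np.array([max(cka(G, T), 0.0) for G in grams])
    return scores / scores.sum()          # normalized combination weights
```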

3.
Kernel methods are an effective approach to nonlinear pattern analysis and a current research focus in machine learning. The kernel function is the key factor affecting the performance of kernel methods. Taking the support vector machine as the carrier of the kernel function, this paper systematically surveys the state of research on kernel selection from three angles: kernel construction, kernel parameter selection, and multiple kernel learning. It further identifies three directions worth investigating: selecting kernels for specific application domains, designing effective kernel evaluation measures, and broadening the scope of kernel selection research.

4.
Kernel discriminant analysis (KDA) is one of the state-of-the-art kernel-based methods for pattern classification and dimensionality reduction. It performs linear discriminant analysis in the feature space via a kernel function. However, the performance of KDA depends greatly on selecting the optimal kernel for the learning task of interest. In this paper, we propose a novel algorithm, elastic multiple kernel discriminant analysis (EMKDA), which uses hybrid regularization to automatically learn kernels over a linear combination of pre-specified kernel functions. EMKDA uses a mixing-norm regularization function to balance the sparsity and non-sparsity of the kernel weights. A semi-infinite-programming-based algorithm is then proposed to solve EMKDA. Extensive experiments on synthetic datasets, UCI benchmark datasets, and digit and terrain databases show the effectiveness of the proposed methods.
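The "mixing norm" regularization mentioned above is, under a common elastic-net-style assumption (the paper's exact form may differ), a convex blend of an L1 term, which drives kernel weights to zero, and a squared L2 term, which keeps them dense:

```latex
\Omega(\mathbf{d}) \;=\; \lambda\,\lVert \mathbf{d} \rVert_1 \;+\; (1-\lambda)\,\lVert \mathbf{d} \rVert_2^2,
\qquad \lambda \in [0,1],\quad d_m \ge 0,
```

so that lambda near 1 yields sparse kernel weights and lambda near 0 yields non-sparse ones.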

5.
A novel fuzzy nonlinear classifier, kernel fuzzy discriminant analysis (KFDA), is proposed to deal with linearly non-separable problems. Using kernel methods, KFDA performs efficient classification in the kernel feature space: through a nonlinear mapping, the input data are mapped implicitly into a high-dimensional feature space where nonlinear patterns appear linear. Unlike fuzzy discriminant analysis (FDA), which is based on Euclidean distance, KFDA uses a kernel-induced distance. Theoretical analysis and experimental results show that the proposed classifier compares favorably with FDA.
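The kernel-induced distance that replaces FDA's Euclidean distance follows directly from expanding the feature-space norm; a minimal sketch, with an RBF kernel as an illustrative choice:

```python
# Kernel-induced distance: ||phi(x) - phi(z)||^2 = k(x,x) - 2 k(x,z) + k(z,z),
# computed purely from kernel evaluations, never forming phi explicitly.
import numpy as np

def rbf(x, z, gamma=1.0):
    return np.exp(-gamma * np.sum((x - z) ** 2))

def kernel_distance(x, z, k=rbf):
    return np.sqrt(k(x, x) - 2.0 * k(x, z) + k(z, z))

x, z = np.array([0.0, 1.0]), np.array([1.0, 2.0])
print(kernel_distance(x, z))
```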

6.
In this paper, multiple kernel learning (MKL) is formulated as a supervised classification problem. We deal with binary classification data, so the data modelling problem involves computing two decision boundaries, one related to kernel learning and the other to the input data. In our approach, both are found with a single cost function: a global reproducing kernel Hilbert space (RKHS) is constructed as the direct sum of the RKHSs corresponding to the decision boundaries of kernel learning and of the input data, and the sought function, representable as the direct sum of the decision boundaries under consideration, is searched for in this global RKHS. In our experimental analysis, the proposed model shows superior performance compared with the existing two-stage function approximation formulation of MKL, where the decision functions for kernel learning and the input data are found separately using two different cost functions. This is because the single-stage representation enables knowledge transfer between the procedures that compute the two decision boundaries, which in turn boosts the generalisation capacity of the model.

7.
王裴岩  蔡东风 《软件学报》2015,26(11):2856-2868
核方法是一类应用较为广泛的机器学习算法,已被应用于分类、聚类、回归和特征选择等方面.核函数的选择与参数优化一直是影响核方法效果的核心问题,从而推动了核度量标准,特别是普适性核度量标准的研究.对应用最为广泛的5种普适性核度量标准进行了分析与比较研究,包括KTA,EKTA,CKTA,FSM和KCSM.发现上述5种普适性度量标准的度量内容为特征空间中线性假设的平均间隔,与支持向量机最大化最小间隔的优化标准存在偏差.然后,使用模拟数据分析了上述标准的类别分布敏感性、线性平移敏感性、异方差数据敏感性,发现上述标准仅是核度量的充分非必要条件,好的核函数可能获得较低的度量值.最后,在9个UCI数据集和20Newsgroups数据集上比较了上述标准的度量效果,发现CKTA是度量效果最好的普适性核度量标准.  相似文献   
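For reference, the first of these measures, kernel-target alignment (KTA), is the Frobenius cosine between a Gram matrix and the ideal target built from labels; a minimal sketch follows (EKTA, CKTA, FSM, and KCSM are not reproduced here; CKTA additionally centers both matrices, as in the CKA sketch above).

```python
# Kernel-target alignment (KTA) between a Gram matrix K and labels y in {-1,+1}.
import numpy as np

def kta(K, y):
    T = np.outer(y, y).astype(float)    # ideal target kernel y y^T
    return np.sum(K * T) / (np.linalg.norm(K) * np.linalg.norm(T))

# Higher alignment suggests (but, as the abstract's analysis shows, does not
# guarantee) a better-suited kernel for the classification task.
```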

8.
To address the "curse of dimensionality" that high-dimensional data causes in classification, a new feature selection algorithm combining kernel functions with sparse learning is proposed. Specifically, each feature dimension is first mapped into a kernel space via a kernel function, and linear feature selection is performed in this high-dimensional kernel space, thereby achieving nonlinear feature selection in the original low-dimensional space. Next, the features mapped into the kernel space are sparsely reconstructed, yielding a sparse representation of the original dataset. An L1-norm-based feature scoring mechanism then selects the optimal feature subset, and the selected features are finally used in classification experiments. Results on public datasets show that the algorithm performs feature selection well, improving classification accuracy by about 3% over the compared algorithms.
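A hedged sketch of the L1-based scoring step described above, using Lasso coefficient magnitudes as feature scores; the kernel-space mapping and the paper's exact scoring rule are not reproduced here.

```python
# L1-based feature scoring sketch: sparse regression coefficients rank features.
import numpy as np
from sklearn.linear_model import Lasso

def l1_feature_scores(X, y, alpha=0.01):
    model = Lasso(alpha=alpha).fit(X, y)
    return np.abs(model.coef_)          # zero weight => feature dropped

def select_top_k(X, scores, k):
    idx = np.argsort(scores)[::-1][:k]  # indices of the k highest-scoring features
    return X[:, idx], idx
```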

9.
Kernel optimization-based discriminant analysis for face recognition
The selection of the kernel function and its parameters influences the performance of a kernel learning machine: different kernels and parameter settings induce different geometric structures in the empirical feature space. Traditionally, changing only the kernel parameters does not change the data distribution in the empirical feature space, so it cannot by itself improve kernel learning. This paper applies kernel optimization to enhance kernel discriminant analysis and proposes Kernel Optimization-based Discriminant Analysis (KODA) for face recognition. KODA consists of two steps, kernel optimization and projection: it automatically adjusts the kernel parameters according to the input samples, improving feature extraction for face recognition. Simulations on the Yale and ORL face databases demonstrate the feasibility of enhancing KDA with kernel optimization.

10.
Traditional multiple kernel learning (MKL) is usually based on an implicit kernel mapping and adopts a combination of kernels instead of a single kernel. MKL has been demonstrated to hold a significant advantage over single-kernel learning. However, although MKL assigns different weights to different kernels, the weights do not change over the input space, a setting that may not fit data with underlying local distributions. To solve this problem, Gönen and Alpaydın (2008) introduced a localized gating model into the traditional MKL framework so as to assign a kernel different weights in different regions of the input space. In this paper, we integrate the localized gating model into our previous work MultiK-MHKS, an effective multiple empirical kernel learning method, to obtain multiple localized empirical kernel learning (MLEKL). Our contribution is to establish the first localized formulation in the empirical kernel learning framework. Experimental results on benchmark data sets validate the effectiveness of the proposed MLEKL.
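A minimal sketch of the localized gating idea in the spirit of Gönen and Alpaydın's model: gating weights depend on the input, so each kernel can dominate in its own region. The linear-softmax gating parameterization below is an illustrative assumption, not their exact formulation.

```python
# Locally weighted kernel combination via a softmax gating function.
import numpy as np

def gating_weights(X, V, b):
    # Softmax over kernels, per sample: eta[i, m] = weight of kernel m at x_i.
    # V has shape (n_features, n_kernels), b has shape (n_kernels,).
    scores = X @ V + b
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def localized_gram(grams, eta):
    # K_loc[i, j] = sum_m eta[i, m] * K_m[i, j] * eta[j, m]
    return sum(np.outer(eta[:, m], eta[:, m]) * G for m, G in enumerate(grams))

# Usage: eta = gating_weights(X, V, b); K = localized_gram(grams, eta)
```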

11.
孙辉, 许洁萍, 刘彬彬. 《计算机应用》, 2015, 35(6): 1753-1756
To address the problem of selecting the optimal kernel for different feature vectors, the multiple-kernel-learning support vector machine (MKL-SVM) is applied to automatic music genre classification, and a method is proposed that combines the optimal kernels by weight into a composite kernel for genre classification. Multiple kernel learning can adopt a different optimal kernel for each acoustic feature and learn each kernel's weight in the classification, making explicit the weight of each acoustic feature in genre classification and providing a clear, well-defined basis for analyzing and selecting feature vectors. The proposed MKL-SVM classification method is validated on the ISMIR 2011 contest dataset and compared with the traditional single-kernel SVM approach. Experimental results show that MKL-SVM improves genre classification accuracy by 6.58% over the traditional single-kernel SVM; moreover, compared with traditional feature selection, the method explains more clearly how strongly each selected feature vector influences genre classification, and classifying with the most influential feature combinations improves the results further.

12.
For learning-based tasks such as image classification, the feature dimension is usually very high, and learning is afflicted by the curse of dimensionality as the search space grows exponentially with the dimension. Discriminant-EM (DEM) is a framework that applies self-supervised learning in a discriminating subspace. This paper extends the linear DEM to a nonlinear kernel algorithm, Kernel DEM (KDEM), and evaluates KDEM extensively on benchmark image databases and synthetic data. Various comparisons with other state-of-the-art learning techniques are conducted on several image classification tasks. Extensive results show the effectiveness of our approach.

13.
张乐园, 李佳烨, 李鹏清. 《计算机应用》, 2018, 38(12): 3444-3449
To address the nonlinearity, low-rank structure, and feature redundancy common in high-dimensional data, an unsupervised kernel-based self-expressive feature selection algorithm, low-rank-constrained nonlinear feature selection (LRNFS), is proposed. First, each feature dimension is mapped into a high-dimensional kernel space, so that linear feature selection in the kernel space achieves nonlinear feature selection in the original low-dimensional space. Then, a bias term is introduced into the self-expressive formulation, and the coefficient matrix is subjected to low-rank and sparsity constraints. Finally, a sparsity regularization term on the coefficient vectors of the kernel matrices performs the feature selection. In the proposed algorithm, the kernel matrices capture the nonlinear relationships, the low-rank constraint exploits the global information of the data for subspace learning, and the self-expressive formulation determines feature importance. Experimental results show that, compared with the rescaled linear square regression (RLSR) semi-supervised feature selection algorithm, classification accuracy after feature selection improves by 2.34%. The algorithm resolves the linear inseparability of data in the low-dimensional feature space and improves the accuracy of feature selection.

14.
Zhang T. 《Neural Computation》, 2005, 17(9): 2077-2098
Kernel methods can embed finite-dimensional data into infinite-dimensional feature spaces. In spite of the large underlying feature dimensionality, kernel methods can achieve good generalization. This observation is often misinterpreted and has been used to argue that kernel learning can magically avoid the "curse of dimensionality" encountered in statistical estimation problems. This letter shows that although a kernel representation can embed data into an infinite-dimensional feature space, the effective dimensionality of this embedding, which determines the learning complexity of the underlying kernel machine, is usually small. In particular, we introduce an algebraic definition of a scale-sensitive effective dimension associated with a kernel representation. Based on this quantity, we derive upper bounds on the generalization performance of some kernel regression methods. Moreover, we show that the resulting convergence rates are optimal under various circumstances.
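The letter's central point can be illustrated with the standard scale-sensitive effective dimension computed from the Gram spectrum; the letter's exact algebraic definition may differ in detail.

```python
# Effective dimension at scale lam: d(lam) = sum_i mu_i / (mu_i + lam),
# where mu_i are the eigenvalues of the Gram matrix K.
import numpy as np

def effective_dimension(K, lam):
    mu = np.linalg.eigvalsh(K)          # Gram spectrum
    mu = np.clip(mu, 0.0, None)         # guard against tiny negative eigenvalues
    return np.sum(mu / (mu + lam))

# Even for an RBF kernel (infinite-dimensional feature space), fast-decaying
# eigenvalues keep d(lam) small, which bounds the learning complexity.
```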

15.
Zhong Zhi, Chen Long. 《Multimedia Tools and Applications》, 2019, 78(23): 33339-33356

For many machine learning and data mining tasks in the information-explosion environment, one is often confronted with very high-dimensional heterogeneous data, and demand has grown for new methods that select discriminative, valuable features beneficial to classification and clustering. In this paper, we propose a novel feature selection method that jointly maps original data from the input space to a kernel space and conducts both subspace learning (via locality preserving projection) and feature selection (via a sparsity constraint). Specifically, the nonlinear relationships among the data are explored by mapping the data from the original low-dimensional space to the kernel space, while the subspace learning technique preserves the local structure of the ambient space. Finally, by restricting the sparsity of the coefficient matrix, the weights of some features become zero, eliminating redundant and irrelevant features so that the method selects informative and distinguishing features. Experimental results comparing the proposed method with several state-of-the-art methods demonstrate that it outperforms them on clustering tasks.

16.
Multiple kernel learning (MKL) is a popular generalization of kernel methods that allows the practitioner to optimize over convex combinations of kernels. We observe that many recent MKL solutions can be cast in the framework of oracle-based optimization and show that they vary in how query points are generated. Such methods are popular because the oracle can fortuitously be implemented as a support vector machine. Motivated by the success of centering approaches in interior point methods, we propose a new approach to optimizing the MKL objective based on the analytic center cutting plane method (ACCPM). Our experimental results show that ACCPM outperforms the state of the art in rate of convergence and robustness. Further analysis sheds some light on why MKL may not always improve classification accuracy over naive solutions.

17.
张成, 李娜, 李元, 逄玉俊. 《计算机应用》, 2014, 34(10): 2895-2898
To address the empirical selection of the Gaussian kernel parameter β in kernel principal component analysis (KPCA), a discriminative kernel parameter selection method for KPCA is proposed. Within-class and between-class kernel widths are computed from the class labels of the training samples, and the kernel parameter is determined from these widths by a discriminative selection procedure. The kernel matrix determined by the discriminatively selected parameter accurately describes the structure of the training space. Principal component analysis (PCA) then decomposes the feature space, extracting principal components for dimensionality reduction and feature extraction. The discriminative kernel width method chooses a smaller width in dense classification regions and a larger width in sparse ones. Discriminative KPCA (Dis-KPCA) is applied to simulated data and the Tennessee Eastman process (TEP); comparisons with KPCA and PCA show that Dis-KPCA effectively reduces the dimensionality of the sample data and separates the three classes 100%, achieving higher dimensionality-reduction accuracy.
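A heavily hedged sketch of the within-/between-class kernel width idea: estimate widths from average pairwise distances within and across classes, then choose the Gaussian parameter between them. The paper's discriminative selection rule is more elaborate; the midpoint rule below is only an illustration.

```python
# Estimate within-class and between-class kernel widths from labeled data.
import numpy as np
from scipy.spatial.distance import pdist, cdist

def class_kernel_widths(X, y):
    classes = np.unique(y)
    within = np.mean([pdist(X[y == c]).mean() for c in classes])
    between = np.mean([cdist(X[y == a], X[y == b]).mean()
                       for i, a in enumerate(classes) for b in classes[i + 1:]])
    return within, between

# e.g. pick beta between the two widths: beta = 0.5 * (within + between)
```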

18.
In this paper, we propose a novel method named mixed kernel CCA (MKCCA) to achieve easy yet accurate dimensionality reduction. MKCCA consists of two major steps. First, the high-dimensional data space is mapped into a reproducing kernel Hilbert space (RKHS), rather than a plain Hilbert space, with a mixture of kernels, i.e., a linear combination of a local kernel and a global kernel; a uniform design for experiments with mixtures is also introduced for model selection. Second, in the new RKHS, kernel CCA is further improved by performing principal component analysis (PCA) followed by CCA for effective dimensionality reduction. We prove that MKCCA can be decomposed into these two separate components, PCA and CCA, which can better remove noise and tackle the trivial-learning issue existing in CCA and traditional kernel CCA. The proposed MKCCA can then be applied to multiple types of learning, such as multi-view, supervised, semi-supervised, and transfer learning, on the reduced data. We show its superiority over existing methods in different types of learning through extensive experimental results.
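A minimal sketch of the kernel mixture MKCCA starts from: a convex combination of a local kernel (RBF) and a global kernel (polynomial). The specific kernels and the mixing weight rho are illustrative assumptions, not the paper's exact choices.

```python
# Mixture of a local (RBF) and a global (polynomial) kernel.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

def mixed_kernel(X, Y=None, rho=0.5, gamma=1.0, degree=2):
    return (rho * rbf_kernel(X, Y, gamma=gamma)
            + (1 - rho) * polynomial_kernel(X, Y, degree=degree))

# Usage: K = mixed_kernel(X); rho could be chosen by the uniform design for
# experiments with mixtures that the abstract mentions.
```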

19.
Multiple kernel learning (MKL) has recently become a hot topic in kernel methods, yet many MKL algorithms suffer from high computational cost. Moreover, standard MKL algorithms face the challenge posed by the rapid development of distributed computing environments such as cloud computing. In this study, a framework for parallel multiple kernel learning (PMKL) using a hybrid alternating direction method of multipliers (H-ADMM) is developed to integrate MKL algorithms with multiprocessor systems. The global problem with multiple kernels is divided into multiple local problems, each of which is optimized on a local processor with a single kernel. An H-ADMM is proposed to make the local processors coordinate with each other to reach the globally optimal solution. Computational experiments show that PMKL achieves high classification accuracy and fast computation.
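The parallel pattern behind PMKL can be illustrated with generic consensus ADMM: each "processor" solves a local subproblem (a toy least-squares below, standing in for a single-kernel learner) while dual updates pull the local solutions into a global consensus. This is the textbook pattern, not the paper's H-ADMM.

```python
# Consensus ADMM sketch: minimize sum_m ||A_m w - b_m||^2 s.t. w_m = z.
import numpy as np

rng = np.random.default_rng(0)
M, n, d, rho = 4, 30, 5, 1.0
A = [rng.normal(size=(n, d)) for _ in range(M)]            # local data blocks
b = [a @ np.ones(d) + 0.1 * rng.normal(size=n) for a in A]

w = [np.zeros(d) for _ in range(M)]
u = [np.zeros(d) for _ in range(M)]
z = np.zeros(d)

for it in range(100):
    # Local step (parallelizable): argmin ||A_m w - b_m||^2 + (rho/2)||w - z + u_m||^2
    w = [np.linalg.solve(2 * A[m].T @ A[m] + rho * np.eye(d),
                         2 * A[m].T @ b[m] + rho * (z - u[m])) for m in range(M)]
    z = np.mean([w[m] + u[m] for m in range(M)], axis=0)   # consensus step
    u = [u[m] + w[m] - z for m in range(M)]                # dual step

print("consensus solution:", np.round(z, 3))
```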
