10 similar documents found; search took 78 ms
1.
Approaches to distance metric learning (DML) for the Mahalanobis distance metric involve estimating a parametric matrix associated with a linear transformation. For complex pattern analysis tasks, it is necessary to consider approaches to DML that estimate a parametric matrix associated with a nonlinear transformation. One such approach performs DML of the Mahalanobis distance in the feature space of a Mercer kernel. In this approach, estimating the parametric matrix of the Mahalanobis distance is formulated as learning an optimal kernel Gram matrix from the Gram matrix of a base kernel by minimizing the LogDet divergence between the two Gram matrices. We propose to use the optimal kernel Gram matrices learnt from the Gram matrices of the base kernels in pattern analysis tasks such as clustering, multi-class pattern classification and nonlinear principal component analysis. We consider the commonly used kernels, such as the linear, polynomial, radial basis function and exponential kernels, as well as hyper-ellipsoidal kernels, as the base kernels for optimal kernel learning. We study the performance of the DML-based class-specific kernels for multi-class pattern classification using support vector machines. Results of our experimental studies on benchmark datasets demonstrate the effectiveness of the DML-based kernels for different pattern analysis tasks.
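The LogDet (Burg) matrix divergence that drives this formulation is easy to state concretely: D_ld(K, K0) = tr(K K0⁻¹) − log det(K K0⁻¹) − n. A minimal numpy sketch (function and variable names are illustrative, and the RBF base kernel is just one of the choices listed above):

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Gram matrix of an RBF base kernel: K0[i, j] = exp(-gamma * ||x_i - x_j||^2)
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def logdet_divergence(K, K0):
    # D_ld(K, K0) = tr(K K0^{-1}) - log det(K K0^{-1}) - n
    n = K.shape[0]
    M = K @ np.linalg.inv(K0)
    sign, logdet = np.linalg.slogdet(M)
    return np.trace(M) - logdet - n
```

The divergence is zero exactly when K equals K0 and positive otherwise, which is what makes it a sensible objective for learning an optimal Gram matrix near a base one.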
2.
Shiju S. S., International Journal of Systems Science, 2017, 48(16): 3569-3580
In this paper, multiple kernel learning (MKL) is formulated as a supervised classification problem. We deal with binary classification data, so the modelling problem involves computing two decision boundaries: one related to kernel learning and the other to the input data. In our approach, both are found with a single cost function by constructing a global reproducing kernel Hilbert space (RKHS) as the direct sum of the RKHSs corresponding to the decision boundaries of kernel learning and of the input data, and searching that global RKHS for the function that can be represented as the direct sum of the decision boundaries under consideration. In our experimental analysis, the proposed model showed superior performance compared with the existing two-stage function approximation formulation of MKL, where the decision functions of kernel learning and of the input data are found separately using two different cost functions. This is because the single-stage representation enables knowledge transfer between the computations of the two decision boundaries, which in turn boosts the generalisation capacity of the model.
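The direct-sum construction leans on a standard RKHS fact: the RKHS of a sum kernel K1 + K2 contains functions of the form f1 + f2 with each fm drawn from the RKHS of Km. A minimal sketch of that fact in use, with a kernel ridge classifier on the summed Gram matrix standing in (hedged) for the paper's single cost function; the toy data and regularisation constant are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two-class toy data: two Gaussian blobs
X = np.vstack([rng.normal(-1, 0.5, (30, 2)), rng.normal(1, 0.5, (30, 2))])
y = np.hstack([-np.ones(30), np.ones(30)])

def linear_kernel(A, B):
    return A @ B.T

def rbf_kernel(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Direct sum of the two RKHSs <=> sum of the two Gram matrices
K = linear_kernel(X, X) + rbf_kernel(X, X)

# A single cost function over the summed kernel: kernel ridge classification
lam = 1e-2
alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)
pred = np.sign(K @ alpha)
train_acc = (pred == y).mean()
```

The single solve over K1 + K2 couples the two components, which is the mechanism the abstract credits for knowledge transfer between the two decision boundaries.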
3.
Multiple kernel learning is a recent hot topic in machine learning. Kernel methods increase the capacity of linear classifiers by mapping data into a high-dimensional space, and are currently an effective route to nonlinear pattern analysis and classification. In some complex situations, however, a kernel method built on a single kernel function cannot fully meet practical demands such as heterogeneous or irregular data, large sample sizes, and unevenly distributed samples, so combining multiple kernel functions to obtain better results is a natural direction of development. This paper therefore proposes a sample-weighted multi-scale kernel support vector machine: kernel functions at different scales are weighted by their ability to fit the samples, yielding a sample-weighted multi-scale kernel SVM decision function. Experiments on several datasets show that the proposed method achieves high classification accuracy on all of them.
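The paper weights kernels at different scales by how well they fit the samples; kernel-target alignment, used below, is a common stand-in heuristic for such weighting rather than the authors' exact scheme (the data and scale grid are illustrative):

```python
import numpy as np

def rbf_kernel(X, gamma):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def alignment(K, y):
    # Kernel-target alignment: <K, yy^T>_F / (||K||_F * ||yy^T||_F)
    Y = np.outer(y, y)
    return (K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y))

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 0.4, (25, 2)), rng.normal(1, 0.4, (25, 2))])
y = np.hstack([-np.ones(25), np.ones(25)])

gammas = [0.01, 0.1, 1.0, 10.0]                   # multiple kernel scales
kernels = [rbf_kernel(X, g) for g in gammas]
scores = np.array([alignment(K, y) for K in kernels])
weights = scores / scores.sum()                   # weight each scale by label fit
K_combined = sum(w * K for w, K in zip(weights, kernels))
```

The combined Gram matrix K_combined would then be fed to a standard SVM solver in place of a single-scale kernel.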
4.
This paper addresses the problem of optimal feature extraction from a wavelet representation. Our work aims at building features by selecting wavelet coefficients resulting from signal or image decomposition on an adapted wavelet basis. For this purpose, we jointly learn in a kernelized large-margin context the wavelet shape as well as the appropriate scale and translation of the wavelets, hence the name “wavelet kernel learning”. This problem is posed as a multiple kernel learning problem, where the number of kernels can be very large. For solving such a problem, we introduce a novel multiple kernel learning algorithm based on active constraint methods. We furthermore propose some variants of this algorithm that can produce approximate solutions more efficiently. Empirical analyses show that our active-constraint MKL algorithm achieves state-of-the-art efficiency. When used for wavelet kernel learning, our experimental results show that the proposed approaches are competitive with the state of the art on brain–computer interface and Brodatz texture datasets.
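A pool of wavelet base kernels at several dilations can be built from the classical translation-invariant wavelet kernel K(x, z) = Π_i h((x_i − z_i)/a) with mother wavelet h(u) = cos(1.75u)·exp(−u²/2). This sketches only the base-kernel pool such an MKL problem would run over, not the paper's learned wavelet shapes or active-constraint solver (the scale grid is illustrative):

```python
import numpy as np

def morlet_wavelet_kernel(A, B, a=1.0):
    # Translation-invariant wavelet kernel with mother wavelet
    # h(u) = cos(1.75 u) * exp(-u^2 / 2) and dilation (scale) a.
    U = (A[:, None, :] - B[None, :, :]) / a
    return np.prod(np.cos(1.75 * U) * np.exp(-U**2 / 2), axis=-1)

rng = np.random.default_rng(2)
X = rng.normal(size=(10, 4))
scales = [0.5, 1.0, 2.0, 4.0]      # candidate dilations -> one base kernel each
base_kernels = [morlet_wavelet_kernel(X, X, a) for a in scales]
```

Adding translations on top of the dilation grid is what makes the kernel count very large, which is the regime the active-constraint algorithm targets.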
5.
6.
7.
Constrained clustering methods (which usually use must-link and/or cannot-link constraints) have received much attention in the last decade. Recently, kernel adaptation or kernel learning has been considered a powerful approach for constrained clustering. However, these methods usually either allow only special forms of kernels or learn non-parametric kernel matrices and scale very poorly. Therefore, they either learn a metric with low flexibility or are applicable only to small datasets due to their high computational complexity. In this paper, we propose a more efficient non-linear metric learning method that learns a low-rank kernel matrix from must-link and cannot-link constraints and the topological structure of the data. We formulate the proposed method as a trace ratio optimization problem and learn appropriate distance metrics by finding optimal low-rank kernel matrices. We solve the proposed optimization problem far more efficiently than SDP solvers. Additionally, we show that spectral clustering methods can be considered a special form of low-rank kernel learning. Extensive experiments demonstrate the superiority of the proposed method over recently introduced kernel learning methods.
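The observation that spectral clustering is a special case of low-rank kernel learning can be made concrete: the top-k eigenvectors V of the symmetrically normalised affinity define a rank-k kernel K = VVᵀ, and spectral clustering is k-means in that low-rank feature space. A minimal sketch with illustrative toy data:

```python
import numpy as np

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(-2, 0.3, (15, 2)), rng.normal(2, 0.3, (15, 2))])

# RBF affinity and its symmetrically normalised version D^{-1/2} W D^{-1/2}
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2)
d = W.sum(axis=1)
S = W / np.sqrt(np.outer(d, d))

# Top-k eigenvectors give a rank-k kernel K = V V^T; spectral clustering
# amounts to k-means on the rows of V, i.e. in this kernel's feature space.
k = 2
vals, vecs = np.linalg.eigh(S)
V = vecs[:, -k:]
K_lowrank = V @ V.T
```

The trace ratio formulation in the paper generalises this by letting the constraints, not just the affinity spectrum, shape the low-rank factor.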
8.
Kernel-based dimensionality reduction and manifold learning are two effective and widely used families of nonlinear dimensionality reduction techniques. They have different starting points and theoretical foundations, and previous work has rarely examined the connection between them. The LTSA algorithm constructs a special kernel matrix from the local structure of the data and then performs kernel principal component analysis with that matrix. Focusing on the local tangent space alignment (LTSA) manifold learning algorithm, this paper studies the intrinsic connection between LTSA and kernel PCA. The analysis shows that LTSA is essentially a kernel-method-based principal component analysis technique.
9.
Kernel learning is widely used in many areas, and many methods have been developed. Kernel principal component analysis (KPCA), a well-known kernel learning method, suffers from two problems in practical applications. First, all training samples must be stored in order to compute the kernel matrix during kernel learning. Second, the kernel and its parameters heavily influence the performance of kernel learning. To address these problems, we present a novel kernel learning method, sparse data-dependent kernel principal component analysis, which reduces the training samples with a sparse learning-based least squares support vector machine and adapts a self-optimizing kernel structure to the input training samples. Experimental results on UCI datasets, the ORL and YALE face databases, and the Wisconsin Breast Cancer database show that the method improves KPCA in both storage cost and kernel structure optimization.
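The KPCA machinery the method builds on (feature-space centring of the Gram matrix followed by an eigendecomposition) can be sketched as follows; the paper's sparse sample selection and self-optimizing kernel are not reproduced here, and the RBF kernel and data are illustrative:

```python
import numpy as np

def kernel_pca(K, n_components):
    # Centre the Gram matrix in feature space, then eigendecompose.
    n = K.shape[0]
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one
    vals, vecs = np.linalg.eigh(Kc)
    vals, vecs = vals[::-1], vecs[:, ::-1]        # descending eigenvalue order
    # Scale eigenvectors so each projected axis has unit feature-space norm
    alphas = vecs[:, :n_components] / np.sqrt(np.maximum(vals[:n_components], 1e-12))
    return Kc @ alphas                            # projections of training points

rng = np.random.default_rng(4)
X = rng.normal(size=(30, 5))
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-0.1 * d2)
Z = kernel_pca(K, 2)
```

Because everything runs through the n×n Gram matrix K, all n training samples must be kept around — exactly the storage problem the sparse variant targets by shrinking the sample set first.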
10.
Minyoung Kim, Applied Intelligence, 2013, 38(1): 45-57
Kernel machines such as Support Vector Machines (SVM) have exhibited successful performance in pattern classification problems, mainly due to their exploitation of potentially nonlinear affinity structures of data through kernel functions. Hence, selecting an appropriate kernel function, or equivalently learning the kernel parameters accurately, has a crucial impact on the classification performance of kernel machines. In this paper, we consider the problem of learning a kernel matrix in a binary classification setup, where the hypothesis kernel family is represented as a convex hull of fixed basis kernels. While many existing approaches involve computationally intensive quadratic or semi-definite optimization, we propose novel kernel learning algorithms based on large-margin estimation of Parzen window classifiers. The optimization is cast as instances of linear programming. This significantly reduces the complexity of kernel learning compared to existing methods, while our large-margin formulation provides tight upper bounds on the generalization error. We empirically demonstrate that the new kernel learning methods maintain or improve the accuracy of existing classification algorithms while significantly reducing the learning time on many real datasets, in both supervised and semi-supervised settings.
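A hedged sketch of the idea: with a Parzen window classifier, each sample's margin is linear in the kernel combination weights, so maximising the minimum margin over the convex hull of basis kernels is a linear programme. The class-mean scoring and the 1-D grid scan below are simplified stand-ins for the paper's estimator and LP solver (data and basis kernels are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(-1, 0.6, (20, 2)), rng.normal(1, 0.6, (20, 2))])
y = np.hstack([-np.ones(20), np.ones(20)])

def rbf(X, gamma):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def parzen_margins(K, y):
    # Parzen window score: difference of class-mean kernel responses.
    pos, neg = y > 0, y < 0
    f = K[:, pos].mean(axis=1) - K[:, neg].mean(axis=1)
    return y * f          # per-sample margin; linear in the kernel weights

K1, K2 = rbf(X, 0.5), rbf(X, 5.0)        # two fixed basis kernels
M = np.column_stack([parzen_margins(K1, y), parzen_margins(K2, y)])

# Maximising the minimum margin over beta*K1 + (1-beta)*K2 is an LP;
# with two kernels the simplex is one-dimensional, so a scan suffices here.
betas = np.linspace(0, 1, 101)
min_margins = np.array([np.min(M @ [b, 1 - b]) for b in betas])
best_beta = betas[np.argmax(min_margins)]
```

With more basis kernels the scan is replaced by an actual linear programme over the simplex, which is what keeps the method cheaper than quadratic or semi-definite formulations.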