Similar Documents
20 similar documents found (search time: 31 ms)
1.
Kernel methods provide high performance in a variety of machine learning tasks. However, their success depends heavily on selecting the right kernel function and properly setting its parameters. Several families of kernel functions based on orthogonal polynomials have been proposed recently. Besides their good error rates, these kernel functions have only one parameter, chosen from a small set of integers, which greatly simplifies kernel selection. Two families of orthogonal polynomial kernel functions, namely the triangularly modified Chebyshev kernels and the triangularly modified Legendre kernels, are proposed in this study. Furthermore, we compare the construction methods of several orthogonal polynomial kernels and highlight the similarities and differences among them. Experiments on 32 data sets are performed to illustrate and compare these kernel functions in classification and regression scenarios. In general, the orthogonal polynomial kernels differ in accuracy, and most of them can match commonly used kernels such as the polynomial kernel, the Gaussian kernel and the wavelet kernel. Compared with these universal kernels, each orthogonal polynomial kernel has a single, easily optimized parameter, and they store statistically significantly fewer support vectors in support vector classification. The newly presented kernels achieve better generalization performance on both classification and regression tasks.
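For orientation, a minimal sketch of a generalized Chebyshev kernel in product form is given below. This is a hypothetical illustration only, assuming inputs rescaled per feature to [-1, 1]; the triangular modification proposed in the abstract above is not reproduced here.

```python
import numpy as np
from numpy.polynomial.chebyshev import chebvander

def chebyshev_kernel(X, Z, n=3):
    """Generalized Chebyshev kernel of integer order n (sketch).

    Assumes every feature is rescaled to [-1, 1]; the triangularly
    modified kernels of the paper add a damping factor not shown here.
    """
    # k(x, z) = prod_j sum_{i=0}^{n} T_i(x_j) * T_i(z_j)
    K = np.ones((X.shape[0], Z.shape[0]))
    for j in range(X.shape[1]):
        Tx = chebvander(X[:, j], n)   # Chebyshev values T_0..T_n, shape (m, n+1)
        Tz = chebvander(Z[:, j], n)
        K *= Tx @ Tz.T
    return K
```

The single integer order n is the only parameter, which is what makes model selection for this family cheap compared with tuning a continuous kernel width.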

2.
Multiple kernel learning (MKL) methods outperform single-kernel learning in both classification and regression tasks, but traditional MKL methods are designed for two-class or multi-class classification problems. To make MKL applicable to one-class classification (OCC) problems, a one-class support vector machine (OCSVM) based on centered kernel alignment (CKA) is proposed. First, CKA is used to compute a weight for each kernel matrix; the resulting weights are then used as linear combination coefficients to combine the different types of kernel functions…
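A minimal sketch of the CKA-weighting idea follows. Since the abstract is truncated before fixing the reference kernel, the target below (the unweighted average of the candidates) is an illustrative assumption, not the paper's choice.

```python
import numpy as np
from sklearn.svm import OneClassSVM

def centered(K):
    # double-center a kernel matrix: H K H with H = I - (1/n) 11^T
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def cka(K1, K2):
    # centered kernel alignment between two kernel matrices
    A, B = centered(K1), centered(K2)
    return np.sum(A * B) / (np.linalg.norm(A) * np.linalg.norm(B))

def combined_kernel(kernels):
    # illustrative target: the unweighted average of the candidates
    # (assumption; the truncated abstract does not specify the target)
    target = sum(kernels) / len(kernels)
    w = np.array([cka(K, target) for K in kernels])
    w /= w.sum()                   # weights become combination coefficients
    return sum(wi * Ki for wi, Ki in zip(w, kernels))

# K = combined_kernel([K_rbf, K_poly, K_lin])  # hypothetical precomputed matrices
# OneClassSVM(kernel='precomputed', nu=0.1).fit(K)
```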

3.
A novel fuzzy nonlinear classifier, called kernel fuzzy discriminant analysis (KFDA), is proposed to deal with linearly non-separable problems. With kernel methods, KFDA can perform efficient classification in kernel feature space. Through a nonlinear mapping, the input data are mapped implicitly into a high-dimensional kernel feature space where a nonlinear pattern appears linear. Unlike fuzzy discriminant analysis (FDA), which is based on Euclidean distance, KFDA uses a kernel-induced distance. Theoretical analysis and experimental results show that the proposed classifier compares favorably with FDA.
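The kernel-induced distance that separates KFDA from Euclidean-distance FDA rests on a standard identity; a small sketch:

```python
import numpy as np

def kernel_distance_sq(k, x, y):
    # squared distance between phi(x) and phi(y) in the induced feature
    # space, computed without ever forming the nonlinear mapping phi:
    # ||phi(x) - phi(y)||^2 = k(x, x) - 2*k(x, y) + k(y, y)
    return k(x, x) - 2.0 * k(x, y) + k(y, y)

rbf = lambda a, b, g=0.5: np.exp(-g * np.sum((np.asarray(a) - np.asarray(b)) ** 2))
d2 = kernel_distance_sq(rbf, [0.0, 1.0], [1.0, 0.0])   # toy example
```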

4.
The advantage of a kernel method often depends critically on a proper choice of the kernel function. A promising approach is to learn the kernel from data automatically. In this paper, we propose a novel method for learning the kernel matrix based on maximizing a class separability criterion similar to those used by linear discriminant analysis (LDA) and kernel Fisher discriminant (KFD). It is interesting to note that optimizing this criterion does not require inverting the possibly singular within-class scatter matrix, a computational problem encountered by many LDA and KFD methods. We have conducted experiments on both synthetic data and real-world data from UCI and FERET, showing that our method consistently outperforms some previous kernel learning methods.
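A sketch of an inversion-free separability criterion of the kind described, using the trace ratio tr(S_b)/tr(S_w) as a stand-in (the paper's exact criterion may differ):

```python
import numpy as np

def trace_ratio(X, y):
    # J = tr(S_b) / tr(S_w); only traces are needed, so a singular
    # within-class scatter matrix S_w never has to be inverted.
    mu = X.mean(axis=0)
    tr_sb = tr_sw = 0.0
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        tr_sb += len(Xc) * np.sum((mc - mu) ** 2)  # between-class scatter
        tr_sw += np.sum((Xc - mc) ** 2)            # within-class scatter
    return tr_sb / tr_sw
```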

5.
Unsupervised feature extraction via kernel subspace techniques
This paper provides a new insight into unsupervised feature extraction techniques based on kernel subspace models. The data projected onto kernel subspace models yield new data representations that may be better suited for classification. Kernel subspace models are usually described using the dual form of the basis vectors, which requires the training data to remain available even during the test phase. By exploiting an incomplete Cholesky decomposition of the kernel matrix, a computationally less demanding implementation is proposed. Online benchmark data sets allow the evaluation of these feature extraction methods by comparing the performance of two classifiers that take as input either the raw data or the new representations.
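The computational saving comes from factoring the kernel matrix as K ≈ GGᵀ with a low-rank G. Below is a standard pivoted incomplete Cholesky, shown as a sketch; it is not necessarily the exact variant used in the paper.

```python
import numpy as np

def incomplete_cholesky(K, tol=1e-6, max_rank=None):
    # pivoted incomplete Cholesky: K ~= G @ G.T with rank(G) << n
    n = K.shape[0]
    r = max_rank or n
    G = np.zeros((n, r))
    d = np.diag(K).astype(float)               # residual diagonal
    for k in range(r):
        i = int(np.argmax(d))                  # greedy pivot choice
        if d[i] <= tol:                        # residual small enough: stop
            return G[:, :k]
        col = K[:, i] - G[:, :k] @ G[i, :k]
        G[:, k] = col / np.sqrt(d[i])
        d -= G[:, k] ** 2
    return G
```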

6.
Existing kernel clustering algorithms must learn the full kernel matrix, which is computationally inefficient and only suitable for small-scale data. To address this, a kernel-based classification algorithm grounded in spectral graph theory is proposed. First, a similarity graph of the unlabeled data is built based on spectral graph theory; then its Laplacian matrix is computed and a subset of the Laplacian's eigenvectors is selected for learning; finally, the kernel k-means algorithm is used to classify the data. Comparative experiments show that, while maintaining good clustering performance, the algorithm is clearly more computationally efficient than other algorithms of the same type and is suitable for medium- and large-scale data.
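A rough sketch of the described pipeline (similarity graph, Laplacian eigenvectors, k-means); plain k-means on the spectral embedding stands in for the kernel k-means step, and all parameter values are illustrative:

```python
import numpy as np
from scipy.sparse.csgraph import laplacian
from sklearn.cluster import KMeans
from sklearn.neighbors import kneighbors_graph

def spectral_labels(X, n_clusters=3, n_neighbors=10):
    # 1. similarity graph of the unlabelled data
    W = kneighbors_graph(X, n_neighbors, mode='connectivity')
    W = 0.5 * (W + W.T)                       # symmetrize
    # 2. normalized graph Laplacian
    L = laplacian(W, normed=True)
    # 3. only a few eigenvectors are needed, not the full kernel matrix
    vals, vecs = np.linalg.eigh(L.toarray())
    U = vecs[:, :n_clusters]                  # smallest-eigenvalue vectors
    # 4. k-means on the spectral embedding (kernel k-means in the paper)
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(U)
```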

7.
This paper presents a novel algorithm to optimize the Gaussian kernel for pattern classification tasks, where it is desirable to have well-separated samples in the kernel feature space. We propose to optimize the Gaussian kernel parameters by maximizing a classical class separability criterion, and the problem is solved through a quasi-Newton algorithm by making use of a recently proposed decomposition of the objective criterion. The proposed method is evaluated on five data sets with two kernel-based learning algorithms. The experimental results indicate that it achieves the best overall classification performance, compared with three competing solutions. In particular, the proposed method provides a valuable kernel optimization solution in the severe small sample size scenario.
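A hedged sketch of the general recipe (a Gaussian width tuned by a quasi-Newton method against a separability score). The objective below is a simple within-versus-between similarity gap substituted for the paper's classical criterion and its decomposition:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import pdist, squareform

def fit_gaussian_width(X, y):
    D2 = squareform(pdist(X, 'sqeuclidean'))   # pairwise squared distances
    same = y[:, None] == y[None, :]            # same-class indicator

    def objective(log_gamma):
        # separability proxy: between-class minus within-class similarity
        K = np.exp(-np.exp(log_gamma[0]) * D2)
        return K[~same].mean() - K[same].mean()

    # quasi-Newton step, as in the paper (L-BFGS-B here)
    res = minimize(objective, x0=[0.0], method='L-BFGS-B')
    return np.exp(res.x[0])                    # optimized kernel width gamma
```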

8.
A common approach in structural pattern classification is to define a dissimilarity measure on patterns and apply a distance-based nearest-neighbor classifier. In this paper, we introduce an alternative method for classification using kernel functions based on edit distance. The proposed approach is applicable to both string and graph representations of patterns. By means of the kernel functions introduced in this paper, string and graph classification can be performed in an implicit vector space using powerful statistical algorithms. The validity of the kernel method cannot be established for edit distance in general. However, by evaluating theoretical criteria we show that the kernel functions are nevertheless suitable for classification, and experiments on various string and graph datasets clearly demonstrate that nearest-neighbor classifiers can be outperformed by support vector machines using the proposed kernel functions.
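One common way to turn an edit distance into a kernel, shown as a sketch; the paper introduces its own kernel constructions, and as the abstract notes, positive semi-definiteness is not guaranteed in general:

```python
import numpy as np

def edit_distance(a, b):
    # standard Levenshtein distance via dynamic programming
    m, n = len(a), len(b)
    D = np.zeros((m + 1, n + 1), dtype=int)
    D[:, 0] = np.arange(m + 1)
    D[0, :] = np.arange(n + 1)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            D[i, j] = min(D[i - 1, j] + 1,            # deletion
                          D[i, j - 1] + 1,            # insertion
                          D[i - 1, j - 1] + (a[i - 1] != b[j - 1]))  # substitution
    return D[m, n]

def edit_kernel(strings, gamma=0.1):
    # k(x, y) = exp(-gamma * d(x, y)); not PSD for every dataset,
    # which is why suitability has to be checked, as in the paper
    D = np.array([[edit_distance(s, t) for t in strings] for s in strings])
    return np.exp(-gamma * D)
```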

9.
Small sample size and high computational complexity are two major problems encountered when traditional kernel discriminant analysis methods are applied to high-dimensional pattern classification tasks such as face recognition. In this paper, we introduce a new kernel discriminant learning method, which is able to effectively address the two problems by using regularization and subspace decomposition techniques. Experiments performed on real face databases indicate that the proposed method outperforms, in terms of classification accuracy, existing kernel methods, such as kernel principal component analysis and kernel linear discriminant analysis, at a significantly reduced computational cost.

10.
A new nonlinear dimensionality reduction method called kernel global–local preserving projections (KGLPP) is developed and applied for fault detection. KGLPP has the advantage of preserving global and local data structures simultaneously. The kernel principal component analysis (KPCA), which only preserves the global Euclidean structure of data, and the kernel locality preserving projections (KLPP), which only preserves the local neighborhood structure of data, are unified in the KGLPP framework. KPCA and KLPP can be easily derived from KGLPP by choosing some particular values of parameters. As a result, KGLPP is more powerful than KPCA and KLPP in capturing useful data characteristics. A KGLPP-based monitoring method is proposed for nonlinear processes. T² and SPE statistics are constructed in the feature space for fault detection. Case studies in a nonlinear system and in the Tennessee Eastman process demonstrate that the KGLPP-based method significantly outperforms KPCA, KLPP and GLPP-based methods, in terms of higher fault detection rates and better fault sensitivity.
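For reference, the two monitoring statistics in their familiar linear-PCA form; KGLPP constructs the analogous quantities in the kernel feature space, and control limits are omitted from this sketch:

```python
import numpy as np

def t2_spe(X_train, X_test, n_comp=2):
    # linear-PCA analogue of the monitoring statistics (sketch)
    mu = X_train.mean(axis=0)
    U, s, Vt = np.linalg.svd(X_train - mu, full_matrices=False)
    P = Vt[:n_comp].T                          # retained loadings
    lam = s[:n_comp] ** 2 / (len(X_train) - 1) # retained eigenvalues
    T = (X_test - mu) @ P                      # scores of the test data
    t2 = np.sum(T ** 2 / lam, axis=1)          # Hotelling's T^2 statistic
    resid = (X_test - mu) - T @ P.T            # part outside the model
    spe = np.sum(resid ** 2, axis=1)           # squared prediction error
    return t2, spe
```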

11.
Kernel Fisher discriminant analysis (KFDA) extracts a nonlinear feature from a sample by evaluating as many kernel functions as there are training samples; its computational efficiency is therefore inversely proportional to the size of the training set. In this paper we propose a more efficient approach to nonlinear feature extraction, FKFDA (fast KFDA). FKFDA consists of two parts. First, we select a portion of the training samples based on two criteria produced by approximating kernel principal component analysis (AKPCA) in the kernel feature space. Then, treating the selected training samples as nodes, we formulate FKFDA to improve the efficiency of nonlinear feature extraction. In FKFDA, the discriminant vectors are expressed as linear combinations of nodes in the kernel feature space, and extracting a feature from a sample only requires evaluating as many kernel functions as there are nodes. Therefore, the proposed FKFDA has a much faster feature extraction procedure than naive kernel-based methods. Experimental results on face recognition and benchmark classification datasets suggest that FKFDA generates well-classified features.

12.
Shape from focus (SFF) is a passive optical method for recovering the three-dimensional (3D) shape of an object from its two-dimensional (2D) images. The focus measure plays an important role in SFF algorithms. Conventional focus measures are mostly gradient-based, so their performance degrades under noisy conditions. Moreover, SFF methods suffer from loss of focus information due to discreteness. This paper introduces a new SFF method based on principal component analysis (PCA) and kernel regression. The focus values are computed through PCA by considering a sequence of small 3D neighborhoods for each object point. We apply unsupervised regression through the Nadaraya–Watson estimator (NWE) to the depth values to obtain a refined 3D shape of the object. This reduces the effect of noise within a small surface area and approximates the accurate 3D shape by exploiting the depth dependencies in the neighborhood. The performance of the proposed scheme is investigated in the presence of different types of noise and textured areas. Experimental results demonstrate the effectiveness of the proposed approach.

13.
The Nadaraya–Watson estimator, also known as kernel regression, is a density-based regression technique. It weights output values with the relative densities in input space; the density is measured with kernel functions that depend on bandwidth parameters. In this work we present an evolutionary bandwidth optimizer for kernel regression. The approach is based on a robust loss function, leave-one-out cross-validation, and the CMSA-ES as optimization engine. A variant with locally parameterized Nadaraya–Watson models enhances the approach and allows the model to adapt to local characteristics of the data space. The unsupervised counterpart of kernel regression is an approach to learning principal manifolds. The learning problem of unsupervised kernel regression (UKR) is based on optimizing the latent variables, a multimodal problem with many local optima. We propose an evolutionary framework for optimizing UKR based on scaling initial locally linear embedding solutions and minimizing the cross-validation error. Both methods are analyzed experimentally.
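A compact sketch of Nadaraya–Watson regression with leave-one-out bandwidth selection; a plain squared loss and a grid search stand in for the robust loss and the CMSA-ES of the paper:

```python
import numpy as np

def loo_error(x, y, h):
    # Gaussian-kernel weights; zeroing the diagonal makes each
    # prediction a leave-one-out Nadaraya-Watson estimate
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    np.fill_diagonal(w, 0.0)
    pred = (w @ y) / w.sum(axis=1)
    return np.mean((pred - y) ** 2)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 6.0, 80)
y = np.sin(x) + 0.1 * rng.normal(size=80)
# crude grid search over bandwidths in place of the paper's CMSA-ES
h_best = min(np.logspace(-2, 0, 30), key=lambda h: loo_error(x, y, h))
```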

14.
The main goal of this paper is to prove inequalities on the reconstruction error for kernel principal component analysis. With respect to previous work on this topic, our contribution is twofold: (1) we give bounds that explicitly take into account the empirical centering step in this algorithm, and (2) we show that a “localized” approach yields more accurate bounds. In particular, we show faster rates of convergence towards the minimum reconstruction error; more precisely, we prove that the convergence rate can typically be faster than n^{-1/2}. We also obtain a new relative bound on the error. A secondary goal, for which we present similar contributions, is to obtain convergence bounds for the partial sums of the largest or smallest eigenvalues of the kernel Gram matrix towards the eigenvalues of the corresponding kernel operator. These quantities are naturally linked to the KPCA procedure, and the results can have applications to the study of various other kernel algorithms. The results are presented in a functional-analytic framework, which is suited to dealing rigorously with reproducing kernel Hilbert spaces of infinite dimension.
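Under the usual KPCA notation (assumed here, as the abstract does not fix it), the central quantity the bounds control is the empirical reconstruction error:

```latex
% Empirical reconstruction error of KPCA for a d-dimensional subspace V
% of the feature space; \varphi is the feature map, \Pi_V the orthogonal
% projection onto V (notation assumed, not fixed by the abstract):
R_n(V) = \frac{1}{n}\sum_{i=1}^{n}
         \bigl\lVert \varphi(x_i) - \Pi_V\,\varphi(x_i) \bigr\rVert^{2},
\qquad
\widehat{V}_d = \operatorname*{arg\,min}_{\dim V = d} R_n(V).
```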

15.
Block-wise 2D kernel PCA/LDA for face recognition
Direct extension of (2D) matrix-based linear subspace algorithms to kernel-induced feature space is computationally intractable and also fails to exploit local characteristics of the input data. In this letter, we develop a 2D generalized framework which integrates the concept of kernel machines with 2D principal component analysis (PCA) and 2D linear discriminant analysis (LDA). To remedy these drawbacks, we propose a block-wise approach based on the assumption that the data are multi-modally distributed in so-called block manifolds. The proposed methods, namely block-wise 2D kernel PCA (B2D-KPCA) and block-wise 2D generalized discriminant analysis (B2D-GDA), attempt to find local nonlinear subspace projections in each block manifold or, alternatively, search for linear subspace projections in the kernel space associated with each blockset. Experimental results on the ORL face database attest to the reliability of the proposed block-wise approach compared with related published methods.

16.
Motivated by the goal of hardening operating system kernels against rootkits and related malware, we survey the common interfaces and methods which can be used to modify (either legitimately or maliciously) the kernel which is run on a commodity desktop computer. We also survey how these interfaces can be restricted or disabled. While we concentrate mainly on Linux, many of the methods for modifying kernel code also exist on other operating systems, some of which are discussed.

17.
Kernel methods have been used for various supervised learning tasks. In this paper, we present a new clustering method based on kernel density. The method makes no assumption on the number of clusters or on their shapes. It is simple, robust, and performs as well as or better than other methods on problems known to be difficult.
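The abstract does not spell the algorithm out. Mean shift, a classical kernel-density clustering method with the same properties (no preset cluster count, arbitrary cluster shapes), illustrates the general idea below; it is not the paper's method:

```python
import numpy as np
from sklearn.cluster import MeanShift

# Two blobs; mean shift climbs the kernel density estimate and merges
# points at its modes, so the number of clusters is not fixed a priori.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
labels = MeanShift(bandwidth=1.5).fit_predict(X)
```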

18.
Kernel principal component analysis (KPCA) and kernel linear discriminant analysis (KLDA) are two commonly used and effective methods for dimensionality reduction and feature extraction. In this paper, we propose a KLDA method based on maximal class separability for extracting the optimal features of analog fault data sets, and compare it with principal component analysis (PCA), linear discriminant analysis (LDA) and KPCA. Meanwhile, a novel particle swarm optimization (PSO) based algorithm is developed to tune the parameters and structures of neural networks jointly. Our study shows that KLDA is overall superior to PCA, LDA and KPCA in feature extraction performance, and that the proposed PSO-based algorithm is easier to implement and trains better than the back-propagation algorithm. The simulation results demonstrate the effectiveness of these methods.

19.
Multiple kernel graph clustering (MKGC) has attracted wide attention in recent years, because multiple kernel learning effectively avoids the selection of kernel functions and kernel parameters, while graph clustering fully exploits the complex structural information among samples. However, existing MKGC methods suffer from the following problems: graph learning techniques complicate the model; the high rank of the graph Laplacian matrix makes it hard to guarantee that the learned affinity graph contains exactly c connected components (the block-diagonal property); and most methods ignore the high-order structural information among the candidate affinity graphs, so the multiple kernel information is not fully exploited. To address these problems, a new MKGC method is proposed. First, a new upper-bounded simplex projection graph learning method is proposed, which projects the kernel matrices directly onto the graph simplex and thereby reduces computational complexity. Meanwhile, a new block-diagonal constraint is introduced so that the learned affinity graph retains an exact block-diagonal property. In addition, low-rank tensor learning is introduced in the upper-bounded simplex projection space to fully mine the high-order structural information of the multiple candidate affinity graphs. Compared with existing MKGC methods on multiple data sets, the proposed method has low computational cost and high stability, and shows clear advantages in clustering accuracy (ACC) and normalized mutual information (NMI).
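The simplex projection at the heart of the first step has a well-known closed form. The sketch below gives the plain probability-simplex projection; the upper-bounded variant of the paper adds a per-entry cap that is not reproduced here:

```python
import numpy as np

def project_simplex(v):
    # Euclidean projection of v onto {w : w >= 0, sum(w) = 1}
    u = np.sort(v)[::-1]                        # sort descending
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u * idx > css - 1)[0][-1]  # largest feasible index
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

# e.g. project each row of a kernel matrix K to get an affinity graph:
# S = np.apply_along_axis(project_simplex, 1, K)
```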

20.
Invariant kernel functions for pattern analysis and machine learning
In many learning problems, prior knowledge about pattern variations can be formalized and beneficially incorporated into the analysis system. The corresponding notion of invariance is commonly used in conceptually different ways. We propose a more distinguishing treatment, in particular for the active field of kernel methods for machine learning and pattern analysis. Additionally, we clarify the fundamental relation between invariant kernels and traditional invariant pattern analysis by means of invariant representations. After addressing these conceptual questions, we focus on practical aspects and present two generic approaches for constructing invariant kernels. The first approach is based on a technique called invariant integration; the second builds on invariant distances. In principle, our approaches support general transformations, in particular covering discrete, non-group, or even infinite sets of pattern transformations. Additionally, both enable a smooth interpolation between invariant and non-invariant pattern analysis, i.e., they form a general covering framework. The wide applicability and various possible benefits of invariant kernels are demonstrated in different kernel methods.
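A sketch of the first construction, invariant integration, here reduced to a finite average of a base kernel over a set of transformations; the base kernel and the cyclic-shift set are illustrative assumptions:

```python
import numpy as np

def invariant_kernel(x, y, base_k, transforms):
    # invariant integration, here as a finite average of the base kernel
    # over all pairs of pattern transformations g, h
    return float(np.mean([base_k(g(x), h(y))
                          for g in transforms for h in transforms]))

rbf = lambda a, b: np.exp(-0.5 * np.sum((a - b) ** 2))
shifts = [lambda v, s=s: np.roll(v, s) for s in range(4)]  # cyclic shifts
x, y = np.arange(4.0), np.arange(4.0)[::-1]
k_val = invariant_kernel(x, y, rbf, shifts)
```

Averaging over the full transformation set makes the kernel value unchanged when either argument is transformed; restricting the set interpolates towards the plain, non-invariant base kernel.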
