Similar Documents (20 results)
1.
A semi-supervised graph kernel dimensionality reduction method
Graph-structured data representation and analysis are receiving growing attention in machine learning. Previous work has focused on defining a kernel function on graphs, i.e., a graph kernel, that measures the similarity between graphs; once such a kernel is defined, a standard support vector machine (SVM) can classify graph data. This paper extends the graph-kernel approach: kernel principal component analysis (kPCA) is first applied to the high-dimensional feature space induced by the graph kernel, yielding low-dimensional vector representations of the original graphs, which can then be analyzed with conventional machine learning methods. By incorporating supervision in the form of pairwise constraints on the graph data into kPCA, a semi-supervised dimensionality reduction method based on graph kernels is obtained. Experimental results on standard graph data sets such as MUTAG and PTC verify the effectiveness of the proposed method.
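A minimal illustrative sketch of the pipeline described above (not the authors' exact algorithm): kernel PCA is run on a precomputed graph-kernel matrix to obtain low-dimensional vector representations, which are then fed to a standard classifier. The random PSD matrix below merely stands in for a real graph kernel (e.g., one computed on MUTAG or PTC by an external graph-kernel library), and the pairwise-constraint supervision is not shown.

import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Stand-in for a real graph kernel: a random positive semi-definite Gram matrix.
A = rng.normal(size=(60, 20))
K_train = A @ A.T                      # (n_train, n_train) graph-kernel matrix
y_train = rng.integers(0, 2, size=60)  # binary graph labels (placeholder)

# Kernel PCA on the graph kernel gives low-dimensional vectors for the graphs.
kpca = KernelPCA(n_components=5, kernel="precomputed")
Z_train = kpca.fit_transform(K_train)

# Any conventional learner can now be applied to the embedded graphs.
clf = SVC().fit(Z_train, y_train)
print(clf.score(Z_train, y_train))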

2.
The paper presents an empirical comparison of the most prominent nonlinear manifold learning techniques for dimensionality reduction in the context of high-dimensional microarray data classification. In particular, we assessed the performance of six methods: isometric feature mapping, locally linear embedding, Laplacian eigenmaps, Hessian eigenmaps, local tangent space alignment and maximum variance unfolding. Unlike previous studies on the subject, the experimental framework adopted in this work properly extends the supervised learning paradigm to dimensionality reduction by regarding the test set as an out-of-sample set of new points excluded from the manifold learning process; this avoids a possible overestimate of the classification accuracy, which may yield misleading comparative results. This empirical approach requires a fast and effective out-of-sample embedding method for mapping new high-dimensional data points into an existing reduced space. To this end, we propose to apply multi-output kernel ridge regression, an extension of linear ridge regression based on kernel functions which has recently been presented as a powerful method for out-of-sample projection when combined with a variant of isometric feature mapping. Computational experiments on a wide collection of cancer microarray data sets show that classifiers based on Isomap, LLE and LE were consistently more accurate than those relying on HE, LTSA and MVU. In particular, under different experimental conditions the LLE-based classifier emerged as the most effective method, whereas the Isomap algorithm turned out to be the second best alternative for dimensionality reduction.
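A rough sketch of the out-of-sample protocol described above, under illustrative settings (the digits data set instead of microarrays, arbitrary hyperparameters): the manifold embedding is learned on the training set only, and multi-output kernel ridge regression maps new points into the reduced space.

import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.manifold import Isomap
from sklearn.kernel_ridge import KernelRidge
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Manifold learning sees only the training set (no test-set leakage).
Z_tr = Isomap(n_components=10).fit_transform(X_tr)

# Multi-output kernel ridge regression: high-dimensional inputs -> embedding coordinates.
krr = KernelRidge(kernel="rbf", alpha=1.0, gamma=1e-3).fit(X_tr, Z_tr)
Z_te = krr.predict(X_te)               # out-of-sample projection of new points

clf = KNeighborsClassifier().fit(Z_tr, y_tr)
print("test accuracy:", clf.score(Z_te, y_te))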

3.
Support vector learning for fuzzy rule-based classification systems
Designing a fuzzy rule-based classification system (fuzzy classifier) with good generalization ability in a high-dimensional feature space has long been an active research topic. As a powerful machine learning approach for pattern recognition problems, the support vector machine (SVM) is known to have good generalization ability. More importantly, an SVM can work very well in a high- (or even infinite) dimensional feature space. This paper investigates the connection between fuzzy classifiers and kernel machines, establishes a link between fuzzy rules and kernels, and proposes a learning algorithm for fuzzy classifiers. We first show that a fuzzy classifier implicitly defines a translation-invariant kernel under the assumption that all membership functions associated with the same input variable are generated by location transformations of a reference function. Fuzzy inference on the IF-part of a fuzzy rule can then be viewed as evaluating the kernel function. The kernel function is proven to be a Mercer kernel if the reference functions meet a certain spectral requirement. The corresponding fuzzy classifier is named the positive definite fuzzy classifier (PDFC). A PDFC can be built from the given training samples using a support vector learning approach, with the IF-parts of the fuzzy rules given by the support vectors. Since the learning process minimizes an upper bound on the expected risk (expected prediction error) instead of the empirical risk (training error), the resulting PDFC usually has good generalization. Moreover, because of the sparsity properties of SVMs, the number of fuzzy rules is independent of the dimension of the input space. In this sense, we avoid the "curse of dimensionality." Finally, PDFCs with different reference functions are constructed using the support vector learning approach. The performance of the PDFCs is illustrated by extensive experimental results. Comparisons with other methods are also provided.
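A loose sketch of the support-vector view of fuzzy rules described above, under simplifying assumptions: a Gaussian reference function gives a translation-invariant (RBF) Mercer kernel, SVM training selects the rules, and each support vector supplies the IF-part of one rule. The data set and hyperparameters are placeholders.

import numpy as np
from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)
svm = SVC(kernel="rbf", gamma=2.0, C=1.0).fit(X, y)

rule_centers = svm.support_vectors_      # IF-part prototypes (one per fuzzy rule)
rule_weights = svm.dual_coef_.ravel()    # THEN-part weights (signed alpha * y)
bias = svm.intercept_[0]

def fuzzy_classifier(x):
    # Each rule fires with a Gaussian membership centred on its support vector.
    firing = np.exp(-2.0 * np.sum((rule_centers - x) ** 2, axis=1))
    return int(np.dot(rule_weights, firing) + bias > 0)

print(fuzzy_classifier(X[0]), y[0])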

4.
Dimensionality reduction is an important and challenging task in machine learning and data mining. Feature selection and feature extraction are two commonly used techniques for decreasing the dimensionality of data and increasing the efficiency of learning algorithms. In particular, feature selection in the absence of class labels, i.e., unsupervised feature selection, is both challenging and interesting. In this paper, we propose a new unsupervised feature selection criterion developed from the viewpoint of subspace learning, which is treated as a matrix factorization problem. The advantages of this work are four-fold. First, building on matrix factorization, a unified framework is established for feature selection, feature extraction and clustering. Second, an iterative update algorithm is provided via matrix factorization, which is an efficient way to deal with high-dimensional data. Third, an effective method for feature selection on numeric data is put forward that does not rely on a discretization step. Fourth, the new criterion provides a sound foundation for embedding kernel tricks into feature selection; to this end, an algorithm based on kernel methods is also proposed. The algorithms are compared with four state-of-the-art feature selection methods on six publicly available datasets. Experimental results demonstrate that, in terms of clustering quality, the two proposed algorithms outperform the others on almost all of the datasets.
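A simplified stand-in for the subspace-learning view of feature selection (not the paper's exact criterion or update algorithm): factorize the data matrix and score each feature by how strongly it loads on the learned subspace, then keep the top-scoring features.

import numpy as np
from sklearn.datasets import load_wine
from sklearn.decomposition import TruncatedSVD

X, _ = load_wine(return_X_y=True)
X = (X - X.mean(axis=0)) / X.std(axis=0)   # work with standardized numeric data

svd = TruncatedSVD(n_components=5, random_state=0).fit(X)
# components_ has shape (k, d); feature j's score is the norm of its loading column.
scores = np.linalg.norm(svd.components_, axis=0)

top_k = 6
selected = np.argsort(scores)[::-1][:top_k]
print("selected feature indices:", selected)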

5.
Dimensionality reduction (DR) methods based on sparse representation, one of the hottest research topics in recent years, have achieved remarkable performance in many applications. However, existing sparse-representation-based methods struggle with nonlinear problems because they seek sparse representations of the data in the original space. Motivated by the kernel trick, we propose a new framework, empirical kernel sparse representation (EKSR), to handle nonlinear problems. In this framework, nonlinearly separable data are mapped into a kernel space where nonlinear similarity can be captured; the data in the kernel space are then reconstructed by sparse representation, preserving the sparse structure obtained by minimizing an ℓ1-regularized objective function. EKSR provides new insights into dimensionality reduction and extends two models: 1) empirical kernel sparsity preserving projection (EKSPP), a feature extraction method based on sparsity preserving projection (SPP); and 2) empirical kernel sparsity score (EKSS), a feature selection method based on sparsity score (SS). Both methods choose neighborhoods automatically, thanks to the natural discriminative power of sparse representation. Compared with several existing approaches, the proposed framework reduces computational complexity and is more convenient in practice.
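A rough illustration of sparse reconstruction in an empirical kernel space, with placeholder data and parameters (this is not the exact EKSR objective): samples are mapped with the empirical kernel map derived from the Gram matrix, and each mapped sample is reconstructed sparsely from the others via an ℓ1-regularized fit.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.linear_model import Lasso

X, _ = load_iris(return_X_y=True)
K = rbf_kernel(X, gamma=0.5)                     # (n, n) Gram matrix

# Empirical kernel map: Phi = K @ P @ Lambda^{-1/2} for the positive eigenpairs of K.
w, P = np.linalg.eigh(K)
keep = w > 1e-8
Phi = K @ P[:, keep] @ np.diag(1.0 / np.sqrt(w[keep]))

n = Phi.shape[0]
S = np.zeros((n, n))                             # sparse reconstruction coefficients
for i in range(n):
    others = np.delete(np.arange(n), i)
    lasso = Lasso(alpha=0.01, max_iter=5000).fit(Phi[others].T, Phi[i])
    S[i, others] = lasso.coef_                   # l1-regularized reconstruction weights

print("average number of nonzero coefficients:", (np.abs(S) > 1e-6).sum(axis=1).mean())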

6.
Du Dapeng, Chen Jiawei, Li Yuexiang, Ma Kai, Wu Gangshan, Zheng Yefeng, Wang Limin. International Journal of Computer Vision, 2022, 130(11): 2842-2857.

Domain generalization aims to improve the generalization capacity of a model by leveraging useful information from multi-domain data. However, learning an effective feature representation from such multi-domain data is challenging due to the domain shift problem. In this paper, we propose an information gating strategy, termed cross-domain gating (CDG), to address this problem. Specifically, we distill domain-invariant features by adaptively muting the domain-related activations in the feature maps. This feature distillation process prevents the network from overfitting to domain-related details, and thereby improves the generalization ability of the learned feature representation. Extensive experiments are conducted on three public datasets. The experimental results show that the proposed CDG training strategy effectively encourages the network to exploit the intrinsic features of objects in multi-domain data, achieving new state-of-the-art domain generalization performance on these benchmarks.


7.
This paper addresses the problem of transductive learning of the kernel matrix from a probabilistic perspective. We place a Wishart process prior on the kernel matrix and construct a hierarchical generative model for kernel matrix learning. Specifically, we consider the target kernel matrix as a random matrix following the Wishart distribution with a positive definite parameter matrix and a degree of freedom. This parameter matrix, in turn, has the inverted Wishart distribution (with a positive definite hyperparameter matrix) as its conjugate prior, and the degree of freedom is equal to the dimensionality of the feature space induced by the target kernel. Formulating this as a missing-data problem, we devise an expectation-maximization (EM) algorithm to infer the missing data, the parameter matrix and the feature dimensionality in a maximum a posteriori (MAP) manner. Using different settings for the target kernel and hyperparameter matrices, our model can be applied to different types of learning problems. In particular, we consider its application in a semi-supervised learning setting and present two classification methods. Classification experiments are reported on several benchmark data sets with encouraging results. In addition, we also devise an EM algorithm for kernel matrix completion.
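A small sketch of the hierarchical generative model only, with made-up sizes; the EM inference and the semi-supervised classification methods are not shown. The hyperparameter matrix yields an inverse-Wishart draw for the parameter matrix, which in turn parameterizes a Wishart draw (with degree of freedom set to an assumed feature-space dimensionality) for the target kernel matrix.

import numpy as np
from scipy.stats import invwishart, wishart

n = 6                      # number of data points (size of the kernel matrix)
d = 10                     # assumed feature-space dimensionality (degree of freedom)
Psi = np.eye(n)            # hyperparameter matrix of the inverse-Wishart prior

Sigma = invwishart(df=n + 2, scale=Psi).rvs(random_state=0)   # parameter matrix
K = wishart(df=d, scale=Sigma / d).rvs(random_state=1)        # target kernel matrix

print("sampled kernel matrix is symmetric PSD:",
      np.allclose(K, K.T), np.all(np.linalg.eigvalsh(K) > -1e-10))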

8.
Image representations and feature selection for multimedia database search
The success of a multimedia information system depends heavily on the way the data is represented. Although there are "natural" ways to represent numerical data, it is not clear what constitutes a good representation for multimedia data such as images, video, or sound. We investigate various image representations, judging the quality of a representation by how well a system for searching through an image database can perform, although the same techniques and representations can be used for other object detection tasks or multimedia data analysis problems. The system is based on a machine learning method that develops object detection models from example images; these models can subsequently be used to search an image database for images of a particular object. As the base classifier for the detection task, we use support vector machines (SVM), a kernel-based learning method. Within the framework of kernel classifiers, we investigate new image representations/kernels derived from probabilistic models of the class of images considered and present a new feature selection method that can reduce the dimensionality of the image representation without significant loss in the performance of the search system.

9.
Dimensionality reduction is a key preprocessing step for alleviating the curse of dimensionality that high-dimensional data causes in classification, and feature selection based on sparse learning is currently a hot research topic. For the many nonlinearly separable problems encountered in practice, the kernel trick is used to map nonlinearly separable samples into a kernel space, so that the nonlinear similarity between features can be handled. The samples in the kernel space are then sparsely reconstructed, yielding a compact sparse representation of the original data in that space, and a corresponding scoring scheme is constructed to select the optimal feature subset. Benefiting from the natural discriminative power of sparse learning, the algorithm selects "good" features that preserve the structural characteristics of the original data, thereby lowering the computational complexity of the learning model and improving classification accuracy. Experimental results on standard UCI data sets show an average improvement of about 5% over comparable algorithms.

10.
Gaussian fields (GF) have recently received considerable attention for dimension reduction and semi-supervised classification. In this paper, we show how the GF framework can be used for semi-supervised regression on high-dimensional data. We propose an active learning strategy based on entropy minimization and a maximum likelihood model selection method. Furthermore, we show how a recent generalization of the LLE algorithm for correspondence learning can be cast into the GF framework, which obviates the need to choose a representation dimensionality.
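A minimal sketch of Gaussian-field (harmonic function) semi-supervised regression on synthetic data; the graph construction and bandwidth are illustrative choices, and the active learning and model selection parts described above are not reproduced here.

import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)    # hidden regression target

labeled = np.arange(20)                              # only a few labeled points
unlabeled = np.arange(20, 200)

W = rbf_kernel(X, gamma=1.0)                         # similarity graph weights
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(axis=1)) - W                       # graph Laplacian

# Harmonic solution: f_u = -L_uu^{-1} L_ul y_l
L_uu = L[np.ix_(unlabeled, unlabeled)]
L_ul = L[np.ix_(unlabeled, labeled)]
f_u = np.linalg.solve(L_uu, -L_ul @ y[labeled])

print("RMSE on unlabeled points:", np.sqrt(np.mean((f_u - y[unlabeled]) ** 2)))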

11.
Derived from traditional manifold learning algorithms, local discriminant analysis methods identify the underlying submanifold structure while employing discriminative information for dimensionality reduction. Mathematically, they can all be unified into a graph embedding framework with different construction criteria. However, such learning algorithms are limited by the curse of dimensionality when the original data lie on a high-dimensional manifold. Unlike existing algorithms, we treat discriminant embedding as a kernel analysis approach in the sample space and propose a kernel-view based discriminant method for embedded feature extraction, in which both PCA pre-processing and data pruning can be avoided. Extensive experiments on high-dimensional data sets show the robustness and strong performance of the proposed method.

12.

In hyperspectral image (HSI) analysis, high-dimensional data may contain noisy, irrelevant and redundant information. Feature selection is one useful way to mitigate the negative effect of such information. Unsupervised feature selection is a data preprocessing technique for dimensionality reduction that selects a subset of informative features without using any label information. Unlike linear models, the autoencoder can be formulated to select informative features nonlinearly. An adjacency matrix of the HSI can be constructed to capture the underlying relationships between data points, and a latent representation of the original data can be obtained from it via matrix factorization. In addition, a new feature representation can also be learned from the autoencoder. For the same data matrix, different feature representations should consistently share the underlying information. Motivated by these observations, we propose a latent representation learning based autoencoder feature selection (LRLAFS) model, in which latent representation learning is used to steer feature selection for the autoencoder. To solve the proposed model, we develop an alternating optimization algorithm. Experimental results on three HSI datasets confirm the effectiveness of the proposed model.


13.
Traditional multiple kernel learning (MKL) is usually based on implicit kernel mappings and adopts a combination of kernels instead of a single kernel. MKL has been demonstrated to have a significant advantage over single-kernel learning. Although MKL assigns different weights to different kernels, the weights do not change over the input space. Such a weighting may not fit data with underlying local distributions. To solve this problem, Gönen and Alpaydın (2008) introduced a localized gating model into the traditional MKL framework so as to assign a kernel different weights in different regions of the input space. In this paper, we integrate the localized gating model into our previous work MultiK-MHKS, an effective multiple empirical kernel learning method. In doing so, we obtain multiple localized empirical kernel learning (MLEKL). Our contribution is that we first establish a localized formulation in the empirical kernel learning framework. Experimental results on benchmark data sets validate the effectiveness of the proposed MLEKL.

14.
Kernel methods and deep learning are two of the most remarkable current machine learning techniques, and both have achieved great success in many applications. Kernel methods are powerful tools for capturing nonlinear patterns behind data. They implicitly learn high- (even infinite-) dimensional nonlinear features in the reproducing kernel Hilbert space (RKHS) while keeping the computation tractable by leveraging the kernel trick. It is commonly agreed that the success of kernel methods depends heavily on the choice of kernel. Multiple kernel learning (MKL) is one scheme that performs kernel combination and selection for a variety of learning tasks, such as classification, clustering, and dimensionality reduction. Deep learning models project input data through several layers of nonlinearity and learn different levels of abstraction. The composition of multiple layers of nonlinear functions can approximate a rich set of naturally occurring input-output dependencies. To bridge kernel methods and deep learning, deep kernel learning has proven to be an effective way to learn complex feature representations by combining the nonparametric flexibility of kernel methods with the structural properties of deep learning. This article presents a comprehensive overview of the state-of-the-art approaches that bridge MKL and deep learning techniques. Specifically, we systematically review the typical hybrid models, training techniques, and their theoretical and practical benefits, followed by remaining challenges and future directions. We hope that our perspectives and discussions serve as valuable references for new practitioners and theoreticians seeking to innovate with approaches that incorporate the advantages of both paradigms and to explore new synergies.

15.
Aerial images typically involve complex scenes and high-dimensional data, and their automatic classification has long been a research focus. To address the excessive feature dimensionality and linear inseparability of raw aerial data, a target recognition method combining kernel dictionary learning and linear discriminant analysis is proposed, building on dictionary learning and sparse representation. First, a kernel dictionary is learned and used to obtain sparse representations of the target samples, exposing the internal structure of the data; second, linear discriminant analysis is applied to strengthen the separability of the sparse representations; finally, a support vector machine classifies the targets. Experimental results show that, compared with traditional subspace-based feature extraction algorithms and dictionary-learning-based algorithms, the algorithm based on kernel dictionary learning and discriminant analysis achieves superior classification performance.
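An illustrative pipeline in the spirit of the method above, with ordinary (non-kernel) dictionary learning standing in for kernel dictionary learning and a generic image data set in place of aerial images: sparse codes are extracted from a learned dictionary, passed through linear discriminant analysis, and classified with an SVM. Hyperparameters are placeholders.

import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# 1) Learn a dictionary and encode samples sparsely.
dico = MiniBatchDictionaryLearning(n_components=100, alpha=1.0, random_state=0).fit(X_tr)
C_tr, C_te = dico.transform(X_tr), dico.transform(X_te)

# 2) Linear discriminant analysis to make the sparse codes more separable.
lda = LinearDiscriminantAnalysis(n_components=9).fit(C_tr, y_tr)
Z_tr, Z_te = lda.transform(C_tr), lda.transform(C_te)

# 3) SVM classification on the discriminant features.
svm = SVC().fit(Z_tr, y_tr)
print("test accuracy:", svm.score(Z_te, y_te))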

16.
This paper focuses on how data representation influences the generalization error of kernel-based learning machines such as support vector machines (SVM) for classification. Frame theory provides a well-founded mathematical framework for representing data in many different ways. We analyze the effects of sparse and dense data representations on the generalization error of such learning machines, as measured by the leave-one-out error given a finite amount of training data. We show that, in the case of sparse data representations, the generalization error of an SVM trained with polynomial or Gaussian kernel functions is equal to that of a linear SVM. This is equivalent to saying that the capacity for separating points of functions belonging to hypothesis spaces induced by polynomial or Gaussian kernel functions reduces to that of a separating hyperplane in the input space. Moreover, we show that, in general, sparse data representations increase or leave unchanged the generalization error of kernel-based methods. Dense data representations, on the contrary, reduce the generalization error in the case of very large frames. We use two different schemes for representing data in overcomplete systems of Haar and Gabor functions, and measure SVM generalization error on benchmark data sets.

17.

Large scale online kernel learning aims to build an efficient and scalable kernel-based predictive model incrementally from a sequence of potentially infinite data points. Current state-of-the-art large scale online kernel learning focuses on improving efficiency. Two key approaches to gain efficiency through approximation are (1) limiting the number of support vectors, and (2) using an approximate feature map. They often employ a kernel whose feature map has intractable dimensionality. While these approaches handle large scale datasets efficiently, they do so at the cost of predictive accuracy because of the approximation. We offer an alternative that puts the choice of kernel at the heart of the approach. It focuses on creating a sparse and finite-dimensional feature map of a kernel called Isolation Kernel. With this new approach, large scale online kernel learning becomes extremely simple: use Isolation Kernel instead of a kernel whose feature map has intractable dimensionality. We show that, using Isolation Kernel, large scale online kernel learning can be achieved efficiently without sacrificing accuracy.
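A sketch of a Voronoi-based Isolation Kernel feature map, simplified from the published construction and with illustrative parameter values: each of t random partitions is built from psi sampled points, a point is encoded by the cell it falls into in every partition, and the resulting sparse finite-dimensional features feed a linear online learner.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.metrics.pairwise import euclidean_distances

def isolation_feature_map(X_ref, X, t=50, psi=16, rng=None):
    rng = rng or np.random.default_rng(0)
    n, features = X.shape[0], []
    for _ in range(t):
        centers = X_ref[rng.choice(len(X_ref), size=psi, replace=False)]
        cell = euclidean_distances(X, centers).argmin(axis=1)   # nearest sampled point
        onehot = np.zeros((n, psi))
        onehot[np.arange(n), cell] = 1.0
        features.append(onehot)
    return np.hstack(features) / np.sqrt(t)

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, y_tr, X_te, y_te = X[:1500], y[:1500], X[1500:], y[1500:]

Phi_tr = isolation_feature_map(X_tr, X_tr)
Phi_te = isolation_feature_map(X_tr, X_te)            # same partitions via the same seed

clf = SGDClassifier(loss="hinge").fit(Phi_tr, y_tr)   # linear online learner
print("test accuracy:", clf.score(Phi_te, y_te))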


18.
In this paper, we propose a novel method named Mixed Kernel CCA (MKCCA) to achieve simple yet accurate dimensionality reduction. MKCCA consists of two major steps. First, the high-dimensional data space is mapped into a reproducing kernel Hilbert space (RKHS), rather than an ordinary Hilbert space, with a mixture of kernels, i.e., a linear combination of a local kernel and a global kernel. Meanwhile, a uniform design for experiments with mixtures is introduced for model selection. Second, in the new RKHS, kernel CCA is further improved by performing Principal Component Analysis (PCA) followed by CCA for effective dimensionality reduction. We prove that MKCCA can be decomposed into two separate components, PCA and CCA, which helps remove noise and tackle the trivial-learning issue that exists in CCA and traditional kernel CCA. The proposed MKCCA can then be applied, with the reduced data, to multiple types of learning, such as multi-view learning, supervised learning, semi-supervised learning, and transfer learning. Extensive experimental results show its superiority over existing methods in these different types of learning.
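A rough sketch of the mixed-kernel idea under illustrative settings (synthetic two-view data, arbitrary kernel weights), not the authors' exact procedure: a local RBF kernel and a global linear kernel are combined, each view is reduced with kernel PCA on the mixed kernel, and CCA is run on the reduced representations.

import numpy as np
from sklearn.metrics.pairwise import rbf_kernel, linear_kernel
from sklearn.decomposition import KernelPCA
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
latent = rng.normal(size=(300, 3))                     # shared latent signal
X = latent @ rng.normal(size=(3, 10)) + 0.1 * rng.normal(size=(300, 10))
Y = latent @ rng.normal(size=(3, 8)) + 0.1 * rng.normal(size=(300, 8))

def mixed_kernel(A, mu=0.5, gamma=0.1):
    # Linear combination of a global (linear) and a local (RBF) kernel.
    return mu * linear_kernel(A, A) + (1 - mu) * rbf_kernel(A, A, gamma=gamma)

Zx = KernelPCA(n_components=5, kernel="precomputed").fit_transform(mixed_kernel(X))
Zy = KernelPCA(n_components=5, kernel="precomputed").fit_transform(mixed_kernel(Y))

cca = CCA(n_components=2).fit(Zx, Zy)
Ux, Uy = cca.transform(Zx, Zy)
print("first canonical correlation:", np.corrcoef(Ux[:, 0], Uy[:, 0])[0, 1])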

19.
齐忍, 朱鹏飞, 梁建青. 软件学报 (Journal of Software), 2017, 28(11): 2992-3001.
Choosing an appropriate distance metric is crucial in machine learning and pattern recognition tasks. Metric learning mainly uses discriminative information to learn a Mahalanobis distance or similarity measure. However, most existing metric learning methods are designed for numerical data; for structured data such as categorical (symbolic) data, measuring the similarity between two objects with a conventional distance metric is unreasonable. Moreover, most metric learning methods suffer from high dimensionality, which leads to long training times and poor scalability. This paper proposes a geometric-mean-based metric learning method for mixed data. Numerical and categorical data are mapped into reproducing kernel Hilbert spaces with different kernel functions, avoiding the negative effects of high feature dimensionality. A geometric-mean-based multiple kernel metric learning model is then proposed, which transforms metric learning on mixed data into the problem of finding the midpoint of two points on a Riemannian manifold. Experimental results on UCI data sets show that, compared with existing metric learning methods, the proposed multiple kernel metric learning method for mixed data achieves better accuracy.
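A small sketch of the Riemannian midpoint mentioned above: the geometric mean of two symmetric positive-definite matrices, A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}. The two SPD matrices below are synthetic placeholders playing the role of the kernels built from the numerical and categorical views of mixed data.

import numpy as np

def spd_sqrt(M, inverse=False):
    # Matrix (inverse) square root of an SPD matrix via eigendecomposition.
    w, V = np.linalg.eigh(M)
    p = -0.5 if inverse else 0.5
    return (V * w ** p) @ V.T

def geometric_mean(A, B):
    A_h, A_ih = spd_sqrt(A), spd_sqrt(A, inverse=True)
    return A_h @ spd_sqrt(A_ih @ B @ A_ih) @ A_h

rng = np.random.default_rng(0)
Ra, Rb = rng.normal(size=(5, 5)), rng.normal(size=(5, 5))
A = Ra @ Ra.T + 5 * np.eye(5)          # SPD kernel from the numerical features (placeholder)
B = Rb @ Rb.T + 5 * np.eye(5)          # SPD kernel from the categorical features (placeholder)

G = geometric_mean(A, B)
# Sanity check: the geometric mean is symmetric under swapping A and B.
print(np.allclose(G, geometric_mean(B, A)))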

20.
The curse of dimensionality has prompted intensive research into effective methods for mapping high-dimensional data. Dimensionality reduction and subspace learning have been studied extensively and widely applied to feature extraction and pattern representation in image and vision applications. Although PCA has long been regarded as a simple, efficient linear subspace technique, many nonlinear methods such as kernel PCA, local linear embedding, and self-organizing networks have recently been proposed for dealing with increasingly complex nonlinear data. The intensive research on nonlinear methods often creates the impression that they are highly superior and preferable, even though the supporting experiments were often limited and the results were not tested for significance. In this paper, we systematically investigate and compare the capabilities of various linear and nonlinear subspace methods for face representation and recognition. The performance of these methods is analyzed and discussed, together with statistical significance tests on the obtained results. Experiments on a range of data sets show that nonlinear methods do not always outperform linear ones, especially on data sets containing noise and outliers or having discontinuous or multiple submanifolds. Certain nonlinear methods combined with certain classifiers do consistently yield better performance than others; however, the differences among them are small and in most cases not significant. A measure is used to quantify the nonlinearity of a data set in a subspace; it explains why good performance is achievable in reduced dimensions when the degree of nonlinearity is low.
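A toy version of the kind of comparison reported above, on a small public data set with arbitrary parameters (not the paper's experimental protocol): classification accuracy after a linear (PCA) projection versus a nonlinear (kernel PCA) projection of the same data.

import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.decomposition import PCA, KernelPCA
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, reducer in [("linear PCA", PCA(n_components=20)),
                      ("kernel PCA (RBF)", KernelPCA(n_components=20, kernel="rbf", gamma=1e-3))]:
    Z_tr = reducer.fit_transform(X_tr)
    Z_te = reducer.transform(X_te)
    acc = KNeighborsClassifier().fit(Z_tr, y_tr).score(Z_te, y_te)
    print(name, "accuracy:", round(acc, 3))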
