Similar Articles
20 similar articles found.
1.
In this paper, we consider the multi-task metric learning problem, i.e., learning multiple metrics from several correlated tasks simultaneously. Despite its importance, only a limited number of approaches address this problem. Whereas existing methods often straightforwardly extend vector-based methods, we propose to couple multiple related metric learning tasks with the von Neumann divergence. On one hand, the novel regularized approach extends previous methods from vector regularization to a general matrix regularization framework; on the other hand, and more importantly, by exploiting the von Neumann divergence as the regularizer, the new multi-task metric learning method can better preserve the data geometry. This leads to more appropriate propagation of side information among tasks and offers potential for further performance improvement. We introduce the concept of geometry-preserving probability and show theoretically that our framework encourages a higher geometry-preserving probability. In addition, our formulation is jointly convex, so the globally optimal solution is guaranteed. Extensive experiments on six data sets from very different disciplines verify that our approach consistently outperforms almost all current methods.
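For readers unfamiliar with the regularizer named above, the snippet below is a minimal sketch (not the paper's implementation) of the von Neumann divergence between two symmetric positive-definite matrices, D(A‖B) = tr(A log A − A log B − A + B), the quantity used here to couple the per-task metric matrices.

```python
# Minimal sketch of the von Neumann matrix divergence between SPD matrices.
import numpy as np

def von_neumann_divergence(A, B, eps=1e-12):
    """D(A||B) = tr(A log A - A log B - A + B) for symmetric positive-definite A, B."""
    def spd_log(M):
        # matrix logarithm via eigendecomposition; eigenvalues clipped for stability
        w, V = np.linalg.eigh(M)
        w = np.clip(w, eps, None)
        return (V * np.log(w)) @ V.T
    return np.trace(A @ spd_log(A) - A @ spd_log(B) - A + B)

# Example: two randomly generated SPD matrices standing in for task metrics.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 5))
M1 = X @ X.T + np.eye(5)
M2 = np.eye(5)
print(von_neumann_divergence(M1, M2))  # >= 0, and 0 iff M1 == M2
```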

2.
Dimensionality reduction has many applications in pattern recognition, machine learning and computer vision. In this paper, we develop a general regularization framework for dimensionality reduction that allows different functions to be used in the cost function. This flexibility is especially important because it enables robustness in the presence of outliers. We show that, under certain conditions, optimizing the regularized cost function is equivalent to solving a nonlinear eigenvalue problem, which can be handled by the self-consistent field (SCF) iteration. Moreover, the framework applies to both unsupervised and supervised learning by defining a regularization term that encodes prior knowledge about the projected samples or the projection vectors. We also note that several linear projection methods can be recovered from this framework by choosing different functions and imposing different constraints. Finally, we demonstrate applications of our framework on various data sets, including handwritten characters, face images, UCI data, and gene expression data.
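As a rough illustration of the SCF iteration mentioned above: the projection matrix solves an eigenvalue problem whose matrix depends on the projection itself, so one alternates between rebuilding the matrix and re-solving for its leading eigenvectors. The sketch below assumes a hypothetical `build_matrix` callback standing in for the method-specific construction.

```python
# Hedged sketch of a self-consistent field (SCF) loop for a nonlinear eigenvalue problem.
import numpy as np

def scf_iteration(X, d, build_matrix, n_iter=50, tol=1e-8):
    n_features = X.shape[1]
    # random orthonormal initialization of the d-dimensional projection
    V = np.linalg.qr(np.random.default_rng(0).standard_normal((n_features, d)))[0]
    for _ in range(n_iter):
        M = build_matrix(X, V)            # symmetric matrix that depends on the current V
        w, U = np.linalg.eigh(M)
        V_new = U[:, -d:]                 # eigenvectors of the d largest eigenvalues
        if np.linalg.norm(V_new @ V_new.T - V @ V.T) < tol:   # subspace change
            V = V_new
            break
        V = V_new
    return V
```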

3.
Recently, addressing the few-shot learning problem within a meta-learning framework has achieved great success. Regularization is a powerful and widely used technique for improving machine learning algorithms, yet little research has focused on designing appropriate meta-regularizers to further improve the generalization of meta-learning models in few-shot learning. In this paper, we propose a novel meta-contrastive loss that can be regarded as such a regularizer. Our motivation is that the limited data in few-shot learning is only a small sample drawn from the whole data distribution and can therefore yield biased representations of it, depending on which part is sampled. Thus, the models trained on the few training samples (support set) and test samples (query set) may be misaligned in model space, so a model learned on the support set does not generalize well to the query data. The proposed meta-contrastive loss aligns the models of the support and query sets to overcome this problem and thereby improves the performance of meta-learning models in few-shot learning. Extensive experiments demonstrate that our method improves different gradient-based meta-learning models on various learning problems, e.g., few-shot regression and classification.
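The paper's exact loss is not reproduced here; the following is only a loose, InfoNCE-style illustration of the alignment idea, where hypothetical per-task embeddings of the support-adapted and query-adapted models are pulled together when they come from the same task and pushed apart otherwise.

```python
# Illustrative contrastive alignment of support- and query-adapted model embeddings.
import numpy as np

def meta_contrastive_loss(support_reps, query_reps, temperature=0.1):
    s = support_reps / np.linalg.norm(support_reps, axis=1, keepdims=True)
    q = query_reps / np.linalg.norm(query_reps, axis=1, keepdims=True)
    logits = s @ q.T / temperature            # similarity of every support/query pair
    # diagonal entries hold the matching (same-task) pairs
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

tasks = 8
rng = np.random.default_rng(1)
print(meta_contrastive_loss(rng.standard_normal((tasks, 16)),
                            rng.standard_normal((tasks, 16))))
```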

4.
Ma, Xueqi; Tao, Dapeng; Liu, Weifeng. Multimedia Tools and Applications, 2019, 78(10): 13313-13329.

The ever-growing popularity of mobile networks and electronics has prompted intensive research on multimedia data management (e.g. text, image, video, audio). This has motivated research on semi-supervised learning, which incorporates a small amount of labeled data and a large amount of unlabeled data by exploiting the local structure of the data distribution. Manifold regularization and pairwise constraints are representative semi-supervised learning techniques. In this paper, we introduce a novel local-structure-preserving approach that considers both manifold regularization and pairwise constraints. Specifically, we construct a new graph Laplacian that, unlike the traditional Laplacian, takes advantage of pairwise constraints. The proposed graph Laplacian better preserves the local geometry of the data distribution and achieves effective recognition. Building on this, we derive graph-regularized classifiers, including support vector machines and kernel least squares as special cases, for action recognition. Experimental results on a multimodal human action database (CAS-YNU-MHAD) show that the proposed algorithms outperform the general algorithms.
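A simplified sketch of the mechanism (the paper's actual construction may differ): start from a k-nearest-neighbor affinity matrix, strengthen must-link edges, remove cannot-link edges, and form the Laplacian L = D − W; the weight `gamma` and the edge handling below are illustrative choices.

```python
# Illustrative constrained graph Laplacian built from a k-NN graph and pairwise constraints.
import numpy as np
from sklearn.neighbors import kneighbors_graph

def constrained_laplacian(X, must_links, cannot_links, k=5, gamma=1.0):
    W = kneighbors_graph(X, k, mode="connectivity", include_self=False).toarray()
    W = np.maximum(W, W.T)                     # symmetrize the k-NN graph
    for i, j in must_links:                    # must-link pairs: strengthen the edge
        W[i, j] = W[j, i] = 1.0 + gamma
    for i, j in cannot_links:                  # cannot-link pairs: remove the edge
        W[i, j] = W[j, i] = 0.0
    D = np.diag(W.sum(axis=1))
    return D - W
```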


5.
Linear discriminant analysis (LDA) is one of the most popular techniques for extracting features in face recognition. LDA captures the global geometric structure, but local geometric structure has recently been shown to be effective for face recognition as well. In this paper, we propose a novel feature extraction algorithm that integrates both global and local geometric structures. We first cast LDA as a least-squares problem based on spectral regression, and then use regularization to model the global and local geometric structures. Furthermore, we impose a penalty on the parameters to tackle the singularity problem and design an efficient model selection algorithm to choose the optimal tuning parameter that balances the tradeoff between the global and local structures. Experimental results on four well-known face data sets show that the proposed integrated framework is competitive with traditional face recognition algorithms that use either the global or the local structure only.
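To make the least-squares view concrete, the sketch below gives one hedged reading of the regularized spectral-regression step: each projection vector solves a penalized least-squares problem in which a ridge term stands in for the global structure and a graph-Laplacian term (with a Laplacian L assumed to be built elsewhere) for the local structure. The parameters alpha and beta are illustrative, not the paper's.

```python
# Hedged sketch: one projection direction from regularized least squares on spectral targets y.
import numpy as np

def spectral_regression_direction(X, y, L, alpha=1.0, beta=1.0):
    # minimize ||X w - y||^2 + alpha * w^T X^T L X w + beta * ||w||^2
    A = X.T @ X + alpha * X.T @ L @ X + beta * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ y)
```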

6.
In this paper, we propose a new variational framework for solving Gaussian mixture model (GMM) based image segmentation by employing a convex relaxation approach. After relaxing the indicator function in the GMM, flexible spatial regularization can be adopted and efficient segmentation achieved. To demonstrate the strength of the proposed framework, global and local intensity information and spatial smoothness are integrated into a new model, which works well on images with inhomogeneous intensity and noise. Compared to the classical GMM, numerical experiments demonstrate that our algorithm achieves promising segmentation performance on images degraded by intensity inhomogeneity and noise.

7.
To reveal the global structure of data while preserving its local structure, this paper unifies feature self-expression and graph regularization in a single framework and proposes a new unsupervised feature selection (UFS) model and method. The model uses feature self-expression, representing each feature as a linear combination of the remaining features, to preserve the local structure among features; a graph regularization term based on the ${L_{2,1}}$ norm preserves the local geometric structure of the data while reducing the influence of noisy data on feature selection; in addition, a low-rank constraint is imposed on the weight matrix to preserve the global structure of the data. Experiments on six different public data sets show that the proposed algorithm clearly outperforms the five competing algorithms, demonstrating the effectiveness of the proposed UFS framework.
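One common way to write such an objective, given here only as a hedged sketch since the exact placement of the terms in the paper may differ, is

\[
\min_{W}\ \|X - XW\|_F^2 \;+\; \alpha\,\mathrm{Tr}\!\left(W^\top X^\top L X W\right) \;+\; \beta\,\|W\|_{2,1} \quad \text{s.t. } \operatorname{rank}(W) \le r,
\]

where $X \in \mathbb{R}^{n \times d}$ is the data matrix and $L$ the graph Laplacian: the first term enforces feature self-expression, the Laplacian term preserves local geometry, the $L_{2,1}$ norm induces row sparsity, the rank constraint captures the global structure, and features are ranked by the row norms $\|w_i\|_2$.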

8.
We present a novel approach for object-category retrieval based on a new type of image representation: the Generalized Correlogram (GC). In our representation, the object is described as a constellation of GCs, each encoding information about some local part and its spatial relations to the other parts (i.e., the part's context). We show how this representation can be used with fast procedures that learn the object category with weak supervision and efficiently match the object model against large collections of images. In the learning stage, we show that by integrating our representation with boosting, the system obtains a compact model represented by very few features, each conveying key properties of the object's parts and their spatial arrangement. In the matching step, we propose direct procedures that exploit our representation to efficiently enforce spatial coherence between matches of local parts. Combined with an appropriate data organization such as inverted files, thousands of images can be evaluated efficiently. The framework has been applied to several standard databases, and our results compare favorably with state-of-the-art methods in both computational cost and accuracy.

9.
Objective: Person re-identification aims to match pedestrian images captured by different cameras at different times and places. Owing to factors such as illumination, background, occlusion, viewpoint and pose, the appearance of the same person varies considerably across cameras. Current research focuses mainly on feature representation and metric learning. Many metric learning methods have achieved good results on person re-identification, but for diverse data sets a single global metric can hardly accommodate heterogeneous features. Local metric learning has been proposed to address this, but such methods usually require solving complex convex optimization problems and are computationally cumbersome. Method: Drawing on the idea of local metric learning and building on recently proposed global metric learning methods such as XQDA (cross-view quadratic discriminant analysis) and MLAPG (metric learning by accelerated proximal gradient), we propose a framework that integrates global and local metric learning. A Gaussian mixture model clusters the training samples, and a local metric is learned within each cluster while a global metric is learned on the entire training set. For a test sample, the local and global metric matrices are combined with weights given by the sample's posterior probabilities under each component of the Gaussian mixture model, and the combined metric measures similarity. In particular, for the MLAPG algorithm, the posterior probabilities under the Gaussian components are used to adjust the loss weights of different samples in the objective function, further improving its performance. Results: Experiments on the VIPeR, PRID 450S and QMUL GRID data sets verify the effectiveness of the proposed integrated global-local metric learning method. Compared with global methods such as XQDA and MLAPG, the matching accuracy on VIPeR improves by about 2.0%, and performance on the other data sets also improves to varying degrees. In addition, experiments with different feature representations show improvements of about 1.3% to 3.4% in matching accuracy over the global methods. Conclusion: The proposed framework effectively integrates global and local metric learning: it improves the performance of various global metric learning algorithms while avoiding the complex computations of local metric learning. The experimental results show that, across different feature representations, the proposed integrated global-local metric learning framework consistently improves upon global metric learning methods.
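The combination step described in the Method paragraph can be sketched as follows; learning the individual metrics (e.g. with XQDA or MLAPG) is outside this snippet, and `lam` as well as the pairwise averaging of posteriors are illustrative choices, not necessarily the paper's.

```python
# Sketch of blending a global metric with GMM-posterior-weighted local metrics.
import numpy as np
from sklearn.mixture import GaussianMixture

def combined_metric_distance(x, y, gmm, M_global, M_locals, lam=0.5):
    # posterior-weighted local metric for this pair (average of both samples' posteriors)
    post = 0.5 * (gmm.predict_proba(x[None]) + gmm.predict_proba(y[None]))[0]
    M_local = sum(p * M_c for p, M_c in zip(post, M_locals))
    M = (1.0 - lam) * M_global + lam * M_local
    d = x - y
    return float(d @ M @ d)   # squared Mahalanobis-style distance under the combined metric
```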

10.
Extreme learning machine (ELM) works for generalized single-hidden-layer feedforward networks (SLFNs), and its essence is that the hidden layer of an SLFN need not be tuned. However, ELM uses only labeled data and is thus limited to supervised learning. To exploit unlabeled data in the ELM model, we first extend the manifold regularization (MR) framework and then demonstrate the relation between the extended MR framework and ELM. Finally, a manifold-regularized extreme learning machine is derived from the proposed framework; it maintains the properties of ELM and is applicable to large-scale learning problems. Experimental results show that the proposed semi-supervised extreme learning machine is the most cost-efficient method: it tends to have better scalability and achieves satisfactory generalization performance at a faster learning speed than traditional semi-supervised learning algorithms.
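A compact sketch in the spirit of the abstract, not the paper's exact formulation: a random hidden layer followed by a ridge-style solve that adds a graph-Laplacian term computed on both labeled and unlabeled samples; the regularization weights are illustrative.

```python
# Hedged sketch of a manifold-regularized ELM for semi-supervised learning.
import numpy as np

def mr_elm_fit(X, y_labeled, labeled_idx, L, n_hidden=200,
               lam_ridge=1e-2, lam_manifold=1e-1, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights (never tuned)
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                            # hidden-layer output for ALL samples
    H_l = H[labeled_idx]                              # rows for labeled samples only
    A = (H_l.T @ H_l
         + lam_ridge * np.eye(n_hidden)
         + lam_manifold * H.T @ L @ H)                # manifold term also uses unlabeled data
    beta = np.linalg.solve(A, H_l.T @ y_labeled)      # output weights in closed form
    return W, b, beta

def mr_elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```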

11.
In this paper, we propose a novel method for fast face recognition based on $L_{1/2}$-regularized sparse representation with hierarchical feature selection. The hierarchical feature selection compresses the scale and dimension of the global dictionary, which directly reduces the computational cost of the sparse representation our approach is rooted in. It consists, hierarchically, of Gabor wavelets and an extreme learning machine auto-encoder (ELM-AE). In the Gabor part, local features are extracted at multiple scales and orientations to form a Gabor-feature-based image, which in turn improves the recognition rate; moreover, for occluded face images, the scale of the Gabor-feature-based global dictionary can be compressed because redundancies exist in the Gabor-feature-based occlusion dictionary. In the ELM-AE part, the dimension of the Gabor-feature-based global dictionary can be compressed because high-dimensional face images can be rapidly represented by low-dimensional features. By introducing $L_{1/2}$ regularization, our approach produces a sparser and more robust representation than $L_1$-regularized sparse representation-based classification (SRC), which further reduces the computational cost of the sparse representation. Compared with related work such as SRC and Gabor-feature-based SRC, experimental results on a variety of face databases demonstrate the great advantage of our method in computational cost, while achieving comparable or even better recognition rates.

12.
In real-world applications, we often have to deal with high-dimensional, sparse, noisy, and non-i.i.d. data. In this paper, we aim to handle this kind of complex data in a transfer learning framework and propose a robust non-negative matrix factorization via joint sparse and graph regularization model for transfer learning. First, we employ robust non-negative matrix factorization via sparse regularization (RSNMF) to handle the source domain data and learn a meaningful matrix containing the common information shared by the source and target domains. Second, we treat this learned matrix as a bridge and transfer it to the target domain, where the target domain data are reconstructed by our robust non-negative matrix factorization via joint sparse and graph regularization model (RSGNMF). Third, we apply feature selection to the new sparse representation of the target data. Fourth, we provide novel, efficient iterative algorithms for the RSNMF and RSGNMF models and give rigorous convergence and correctness analyses for each. Finally, experimental results on both text and image data sets demonstrate that our REGTL model outperforms existing state-of-the-art methods.

13.
In this paper, we focus on techniques for vector-valued image regularization based on variational methods and PDEs. Starting from the PDE-based formalisms previously proposed in the literature for the regularization of scalar and vector-valued data, we propose a unifying expression that gathers the majority of these frameworks into a single generic anisotropic diffusion equation. On one hand, the resulting expression provides a simple interpretation of the regularization process in terms of local filtering with spatially adaptive Gaussian kernels. On the other hand, it naturally decomposes any regularization scheme into the smoothing process itself and the underlying geometry that drives the smoothing. Thus, we can easily specialize our generic expression into different regularization PDEs that fulfill desired smoothing behaviors, depending on the considered application: image restoration, inpainting, magnification, flow visualization, etc. Specific numerical schemes are also proposed, allowing us to implement our regularization framework accurately by taking the local filtering properties of the proposed equations into account. Finally, we illustrate the wide range of applications handled by the selected anisotropic diffusion equations with results on color images.
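As a toy illustration of the diffusion viewpoint (far simpler than the paper's trace-based anisotropic scheme), the snippet below performs one explicit Perona-Malik-style step per channel of a vector-valued image, with an edge-stopping conductivity and, for brevity, periodic boundaries.

```python
# Toy per-channel nonlinear diffusion step for a vector-valued (e.g. RGB) image.
import numpy as np

def diffusion_step(u, dt=0.1, kappa=0.2):
    # forward differences with periodic (wrap-around) boundaries, per channel
    gx = np.roll(u, -1, axis=1) - u
    gy = np.roll(u, -1, axis=0) - u
    g = 1.0 / (1.0 + (gx**2 + gy**2) / kappa**2)      # edge-stopping conductivity
    flux_x, flux_y = g * gx, g * gy
    div = (flux_x - np.roll(flux_x, 1, axis=1)
           + flux_y - np.roll(flux_y, 1, axis=0))     # discrete divergence of the flux
    return u + dt * div

rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))                         # noisy RGB image stand-in
for _ in range(20):
    img = diffusion_step(img)
```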

14.
In this paper, we present a locality-constrained nonnegative robust shape interaction (LNRSI) subspace clustering method. LNRSI integrates the local manifold structure of the data into the robust shape interaction (RSI) in a unified formulation, which guarantees both the locality and the low-rank property of the optimal affinity graph. Compared with traditional low-rank representation (LRR) learning, LNRSI not only pursues the global structure of the data space through low-rank regularization but also preserves the local manifold structure, leading to a sparse and low-rank affinity graph. Owing to the clear block-diagonal structure of the affinity graph, LNRSI is robust to noise and occlusions and achieves a higher rate of correct clustering. A theoretical analysis of the clustering effect is also provided. An efficient solver based on the linearized alternating direction method with adaptive penalty (LADMAP) is developed for our method. Finally, we evaluate LNRSI on both synthetic data and real computer vision tasks, i.e., motion segmentation and handwritten digit clustering. The experimental results show that LNRSI outperforms several state-of-the-art algorithms.

15.
16.
The problem of clustering with side information has received much recent attention, and metric learning is considered a powerful approach to it. Various metric learning methods have been proposed for semi-supervised clustering. Although some existing methods can use both positive (must-link) and negative (cannot-link) constraints, they are usually limited to learning a linear transformation (i.e., finding a global Mahalanobis metric). In this paper, we propose a framework for efficiently learning both linear and non-linear transformations. We use positive and negative constraints as well as the intrinsic topological structure of the data. We formulate our metric learning method as an appropriate optimization problem and find its global optimum. The proposed non-linear method can be viewed as an efficient kernel learning method that yields an explicit non-linear transformation and thus generalizes to out-of-sample data. Experimental results on synthetic and real-world data sets show the effectiveness of our metric learning method for semi-supervised clustering tasks.

17.
Dynamic ensemble selection of classifiers is an effective approach to classifying label-imbalanced data. However, the technique is prone to overfitting, owing to the lack of regularization methods and its dependence on the local geometry of the data. In this study, focusing on binary imbalanced data classification, a novel dynamic ensemble method, adaptive ensemble of classifiers with regularization (AER), is proposed to overcome these limitations. The method addresses overfitting from the perspective of implicit regularization: it leverages the properties of stochastic gradient descent to obtain the minimum-norm solution, thereby achieving regularization, and it interpolates the ensemble weights by exploiting the global geometry of the data to further prevent overfitting. According to our theoretical proofs, the seemingly complicated AER paradigm, in addition to its regularization capabilities, actually reduces the asymptotic time and memory complexities of several other algorithms. We evaluate AER on seven benchmark imbalanced data sets from the UCI machine learning repository and on one artificially generated GMM-based data set with five variations. The results show that the proposed algorithm outperforms the major existing algorithms on multiple metrics in most cases, and two hypothesis tests (McNemar's and Wilcoxon tests) further verify the statistical significance. In addition, the proposed method has other desirable properties, such as particular advantages on highly imbalanced data, and it pioneers research on regularization for dynamic ensemble methods.

18.
Objective: Physics-based smoke simulation is an important part of computer graphics, and rendering high-resolution smoke with fine structures requires large computational resources and high-accuracy numerical solvers. To address the slow speed and difficulty of current high-accuracy turbulent smoke simulation, we propose a dictionary-neural-network-based method that rapidly synthesizes turbulent smoke, adding detail to the synthesized results while preserving the important structures of the high-resolution smoke. Method: High-resolution and low-resolution turbulent smoke data are obtained with a high-accuracy numerical solver, and a data set is generated by sampling local patches of the velocity field together with the corresponding spatial position and temporal features. A dictionary neural network architecture is designed to train a predictor of the dictionary of high-frequency smoke components, which is parallelized on the GPU (graphics processing unit) to rapidly synthesize high-resolution turbulent smoke. Results: Experiments show that the dictionary-neural-network method can synthesize spatially and temporally coherent high-resolution turbulent smoke from very low-resolution smoke data, an order of magnitude faster than directly simulating high-resolution turbulent smoke on the GPU, while the synthesized smoke remains sufficiently close to the high-resolution results obtained by numerical simulation. Conclusion: The proposed method solves the smoke up-sampling problem: by designing a dictionary neural network structure and feature descriptors that encode the local and global information of the smoke velocity field, it rapidly synthesizes high-resolution turbulent smoke from very low-resolution simulation results while preserving fine smoke detail. Comparison with numerical simulation demonstrates the effectiveness of the method.

19.
We develop a supervised dimensionality reduction method, called Lorentzian discriminant projection (LDP), for feature extraction and classification. Our method represents the structure of the sample data by a manifold furnished with a Lorentzian metric tensor. Unlike classic discriminant analysis techniques, LDP uses distances from points to their within-class neighbors and to the global geometric centroid to model a new manifold that captures the intrinsic local and global geometric structures of the data set. In this way, both the geometry of a group of classes and the global data structure can be learned from the Lorentzian metric tensor, so discriminant analysis in the original sample space reduces to metric learning on a Lorentzian manifold. We also establish kernel, tensor, and regularization extensions of LDP. Experimental results on benchmark databases demonstrate the effectiveness of the proposed method and its extensions.

20.
Clustering is a long-standing research topic in data mining and machine learning. Most traditional clustering methods can be categorized as either local or global. In this paper, we propose a novel clustering method that explores both the local and global information in a data set. The method, Clustering with Local and Global Regularization (CLGR), minimizes a cost function that properly trades off the local and global costs. We show that this optimization problem can be solved by the eigenvalue decomposition of a sparse symmetric matrix, which can be done efficiently with iterative methods. Finally, experimental results on several data sets demonstrate the effectiveness of our method.
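A brief sketch of the computational core mentioned above, with the matrix assembly stubbed out: once the local and global regularizers have been combined into a sparse symmetric matrix, the cluster indicators come from its smallest eigenvectors, which iterative sparse solvers handle efficiently. The stand-in matrix below is simply a k-NN graph Laplacian, not the paper's actual construction.

```python
# Sketch: spectral cluster indicators from the smallest eigenvectors of a sparse symmetric matrix.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh
from sklearn.cluster import KMeans
from sklearn.neighbors import kneighbors_graph

def cluster_from_sparse_matrix(M, n_clusters):
    # smallest-eigenvalue eigenvectors of the sparse symmetric matrix M
    vals, vecs = eigsh(M.asfptype(), k=n_clusters, which="SM")
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(vecs)

# Example with a stand-in matrix: a sparse graph Laplacian on a random k-NN graph.
X = np.random.default_rng(0).random((200, 5))
W = kneighbors_graph(X, 10, mode="connectivity")
W = 0.5 * (W + W.T)
L = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W
labels = cluster_from_sparse_matrix(L, n_clusters=3)
```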

