Similar Documents
20 similar documents found.
1.
A machine learning framework is described that uses unlabeled data from a related task domain in supervised classification tasks. The unlabeled data come from related domains that share the same class labels or generative distribution as the labeled data. Patterns in the unlabeled data are learned via a neural network and transferred to the target domain, from which the labeled data are generated, so as to improve the performance of the supervised learning task. We call this approach self-taught transfer learning from unlabeled data. We introduce a general-purpose feature learning algorithm that produces features retaining information from the unlabeled data. This information preservation ensures that the obtained features are useful for improving the classification performance of the supervised tasks.

2.
Disagreement-based semi-supervised learning
Zhou Zhi-Hua, Acta Automatica Sinica, 2013, 39(11): 1871-1878
Traditional supervised learning usually requires a large number of labeled samples as training examples, yet in many real-world problems, although large amounts of data are easy to collect, labeling them costs substantial human and material resources. Can learning performance be improved by exploiting abundant unlabeled data when only a few labeled examples are available? Semi-supervised learning has therefore become a major research focus in machine learning over the past decade or so. Disagreement-based semi-supervised learning is one of the mainstream paradigms in this field: it exploits unlabeled data through multiple learners, and the "disagreement" among the learners is crucial to the success of learning. This article briefly surveys some research advances in this direction.
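The disagreement-based paradigm can be illustrated with a minimal co-training sketch (a toy illustration under assumed synthetic data, not the survey's own algorithm): two nearest-centroid learners, each seeing a different feature "view" of the data, where one learner's confident predictions on unlabeled points augment the other learner's training set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-view data: two well-separated Gaussian classes in 4-D;
# each learner sees only one 2-D "view" of the features.
n = 100
X = np.vstack([rng.normal(-2.0, 1.0, (n, 4)),
               rng.normal(+2.0, 1.0, (n, 4))])
y = np.repeat([0, 1], n)
view_a, view_b = X[:, :2], X[:, 2:]

labeled = np.array([0, 1, n, n + 1])                 # two labels per class
unlabeled = np.setdiff1d(np.arange(2 * n), labeled)

def fit_predict(V, train_idx, train_y):
    """Nearest-centroid learner: labels and confidences for all rows of V."""
    cents = np.array([V[train_idx][train_y == c].mean(axis=0) for c in (0, 1)])
    d = np.linalg.norm(V[:, None, :] - cents[None, :, :], axis=2)
    return d.argmin(axis=1), -d.min(axis=1)         # label, confidence

# One co-training round: learner A pseudo-labels its most confident
# unlabeled points, and those points augment learner B's training set.
lab_a, conf_a = fit_predict(view_a, labeled, y[labeled])
top_a = unlabeled[np.argsort(-conf_a[unlabeled])[:20]]
train_b = np.concatenate([labeled, top_a])
y_b = np.concatenate([y[labeled], lab_a[top_a]])
pred_b, _ = fit_predict(view_b, train_b, y_b)
accuracy = float((pred_b == y).mean())
```

In a full disagreement-based method the exchange runs in both directions over many rounds; this single round only shows why confident cross-view pseudo-labels help.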

3.
Semi-supervised local dimensionality reduction
In mining and analyzing high-dimensional data, often only limited pairwise constraint information (must-link and cannot-link constraints) is available; lacking class labels, supervised dimensionality reduction methods frequently fail to produce satisfactory results. In this situation, exploiting large numbers of unlabeled samples can improve performance. Using pairwise constraints together with abundant unlabeled samples, this paper proposes a semi-supervised local dimensionality reduction method (SLDR). SLDR integrates the local structure of the data with the pairwise constraints to find an optimal projection: when the data are projected into the low-dimensional space, sample pairs under cannot-link constraints are pushed farther apart, pairs under must-link constraints are drawn closer together, and the intrinsic geometry of the data is preserved. SLDR can also be generalized to a nonlinear method, making it suitable for dimensionality reduction of nonlinear data. Experimental results on a variety of datasets fully validate the effectiveness of the proposed algorithm.

4.
Geng Chuan-Xing, Tan Zheng-Hao, Chen Song-Can, Journal of Software, 2023, 34(4): 1870-1878
With the free supervisory signals/labels created by pretext tasks, self-supervised learning (SSL) can learn effective representations from unlabeled data, as has been verified on a variety of downstream tasks. Existing pretext tasks usually first apply explicit linear or nonlinear transformations to the original view of the data, forming multiple augmented views, and then learn representations by predicting the labels corresponding to those views or transformations, or by maximizing consistency among the views. We find that such self-supervised augmentation (i.e., augmentation of both the data and the self-supervised labels) benefits not only unsupervised pretext tasks but also supervised classification, yet little current work pays attention to this. Existing methods either treat the pretext task as a learning aid for the downstream classification task, modeling it via multi-task learning, or jointly model the downstream labels and self-supervised labels via multi-label learning. However, inherent differences (in semantics, task difficulty, etc.) often exist between the downstream and pretext tasks, which inevitably causes competition between the two and poses risks to learning the downstream task. To tackle this problem, we propose a simple but effective self-supervised multi-view learning framework (SSL-MV), which avoids interference from the self-supervised labels on downstream label learning by performing, on the augmented data views, the same learning as the downstream task. More interestingly, thanks to multi-view learning, the designed framework naturally acquires ensemble inference ability, significantly improving performance on the downstream classification task. Finally, extensive experiments on benchmark datasets verify the effectiveness of SSL-MV.

5.
Context plays an important role when performing classification, and in this paper we examine context from two perspectives. First, the classification of items within a single task is placed within the context of distinct concurrent or previous classification tasks (multiple distinct data collections). This is referred to as multi-task learning (MTL), and is implemented here in a statistical manner, using a simplified form of the Dirichlet process. In addition, when performing many classification tasks one has simultaneous access to all unlabeled data that must be classified, and therefore there is an opportunity to place the classification of any one feature vector within the context of all unlabeled feature vectors; this is referred to as semi-supervised learning. In this paper we integrate MTL and semi-supervised learning into a single framework, thereby exploiting two forms of contextual information. Example results are presented on a "toy" example, to demonstrate the concept, and the algorithm is also applied to three real data sets.

6.
Semi-supervised dimensionality reduction finds an optimal low-dimensional discriminative space from a high-dimensional data space by exploiting auxiliary information together with a large number of unlabeled samples, facilitating subsequent classification or clustering; it is regarded as an effective way to understand high-dimensional data such as gene sequences, text, and face images. This paper proposes a general framework for semi-supervised dimensionality reduction based on pairwise constraints (SSPC). The method first learns a discriminative adjacency matrix using the pairwise constraints and the intrinsic geometric structure of the unlabeled samples; it then applies the learned projection to map data from the original high-dimensional space into a low-dimensional space, so that samples within a cluster become more compact while samples from different clusters become as far apart as possible. The proposed algorithm not only finds an optimal linear discriminative subspace but can also reveal the nonlinear structure of manifold data. Experimental results on several real datasets show that the new method outperforms current mainstream pairwise-constraint-based dimensionality reduction algorithms.
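The pairwise-constraint idea can be illustrated with a minimal numpy sketch (toy data and a simplified objective of my own, not the paper's SSPC algorithm): seek a direction that maximizes the scatter of cannot-link pairs minus the scatter of must-link pairs.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: two clusters separated along the first axis, noise on the second.
A = rng.normal(0.0, 0.5, size=(30, 2)) + np.array([-2.0, 0.0])
B = rng.normal(0.0, 0.5, size=(30, 2)) + np.array([+2.0, 0.0])
X = np.vstack([A, B])

must_link = [(0, 1), (2, 3), (30, 31), (32, 33)]     # same-cluster pairs
cannot_link = [(0, 30), (1, 31), (2, 32)]            # cross-cluster pairs

def pair_scatter(X, pairs):
    """Sum of outer products of pairwise differences."""
    S = np.zeros((X.shape[1], X.shape[1]))
    for i, j in pairs:
        d = (X[i] - X[j])[:, None]
        S += d @ d.T
    return S

# Direction that pushes cannot-link pairs apart and pulls must-link pairs
# together: top eigenvector of S_cannot - S_must.
S = pair_scatter(X, cannot_link) - pair_scatter(X, must_link)
vals, vecs = np.linalg.eigh(S)
w = vecs[:, -1]                      # eigenvector of the largest eigenvalue
Z = X @ w                            # 1-D embedding

# After projection, cannot-link pairs should sit farther apart than must-link ones.
ml_gap = float(np.mean([abs(Z[i] - Z[j]) for i, j in must_link]))
cl_gap = float(np.mean([abs(Z[i] - Z[j]) for i, j in cannot_link]))
```

SSPC additionally folds in a graph over the unlabeled samples; this sketch keeps only the constraint terms to show the mechanism.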

7.
Algorithms on streaming data have attracted increasing attention in the past decade. Among them, dimensionality reduction algorithms are of particular interest because many real tasks demand them. Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) are two of the most widely used dimensionality reduction approaches. However, PCA is not optimal for general classification problems because it is unsupervised and ignores label information that is valuable for classification. On the other hand, the performance of LDA degrades when the number of available low-dimensional projection directions is limited and when the singularity problem arises. Recently, the Maximum Margin Criterion (MMC) was proposed to overcome the shortcomings of PCA and LDA. Nevertheless, the original MMC algorithm does not fit the streaming data model and cannot handle large-scale high-dimensional data sets, so an effective, efficient, and scalable approach is needed. In this paper, we propose a supervised incremental dimensionality reduction algorithm, and an extension of it, that infer adaptive low-dimensional spaces by optimizing the maximum margin criterion. Experimental results on a synthetic dataset and real datasets demonstrate the superior performance of our proposed algorithm on streaming data.
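The batch maximum margin criterion itself admits a compact sketch (a toy reconstruction of the standard MMC formulation on made-up data, not the incremental algorithm of this paper): maximize tr(W^T (Sb - Sw) W), which needs only an eigendecomposition and, unlike LDA, no inverse of the within-class scatter.

```python
import numpy as np

rng = np.random.default_rng(2)

# Three Gaussian classes in 5-D; only the first two dims are discriminative.
means = np.array([[0, 0, 0, 0, 0],
                  [4, 0, 0, 0, 0],
                  [0, 4, 0, 0, 0]], dtype=float)
X = np.vstack([m + rng.normal(0, 1, (40, 5)) for m in means])
y = np.repeat([0, 1, 2], 40)

mu = X.mean(axis=0)
Sb = np.zeros((5, 5))                     # between-class scatter
Sw = np.zeros((5, 5))                     # within-class scatter
for c in range(3):
    Xc = X[y == c]
    mc = Xc.mean(axis=0)
    Sb += len(Xc) * np.outer(mc - mu, mc - mu)
    Sw += (Xc - mc).T @ (Xc - mc)

# MMC: eigenvectors of Sb - Sw. No Sw^{-1}, so singular Sw is harmless.
vals, vecs = np.linalg.eigh(Sb - Sw)      # eigenvalues in ascending order
W = vecs[:, -2:]                          # top-2 directions as the projection
Z = X @ W
```

The streaming contribution of the paper lies in updating this solution incrementally; the sketch only shows the criterion being optimized.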

8.
The problem of learning from both labeled and unlabeled data is considered. In this paper, we present a novel semisupervised multimodal dimensionality reduction (SSMDR) algorithm for feature reduction and extraction. SSMDR preserves the local and multimodal structures of labeled and unlabeled samples, so data pairs that lie close in the original space are projected to nearby locations in the embedding space. Supervised dimensionality reduction methods tend to overfit, and thus perform poorly, when only a few labeled samples are available; in such cases, unlabeled samples play a significant role in boosting the learning performance. The proposed discriminant technique has an analytical form of the embedding transformations, which can be obtained effectively by applying eigendecomposition or by finding two close-to-optimal sets of transforming basis vectors. By employing the standard kernel trick, SSMDR can be extended to nonlinear dimensionality reduction scenarios. We verify the feasibility and effectiveness of SSMDR through extensive simulations, including data visualization and classification on synthetic and real-world datasets. The results reveal that SSMDR offers significant advantages over some widely used techniques and exhibits superior performance on multimodal cases.

9.
We consider the problem of hierarchical or multitask modeling where we simultaneously learn the regression function and the underlying geometry and dependence between variables. We demonstrate how the gradients of the multiple related regression functions over the tasks allow for dimension reduction and inference of dependencies across tasks jointly and for each task individually. We provide Tikhonov regularization algorithms for both classification and regression that are efficient and robust for high-dimensional data, and a mechanism for incorporating a priori knowledge of task (dis)similarity into this framework. The utility of this method is illustrated on simulated and real data.

10.
A novel transfer learning method is proposed in this paper to solve power load forecasting problems in the smart grid. Prediction errors on the target tasks can be greatly reduced by utilizing knowledge transferred from the source tasks. In this work, a source task selection algorithm is developed and a transfer learning model based on Gaussian processes is constructed. Negative knowledge transfer, a problem in previous work, is avoided, and prediction accuracy is therefore greatly improved. In addition, a fast inference algorithm is developed to accelerate the prediction steps. Experimental results on real-world data illustrate the effectiveness of the method.

11.
As a recently proposed machine learning method, active learning of Gaussian processes can effectively use a small number of labeled examples to train a classifier, which in turn is used to select the most informative examples from the unlabeled data for manual labeling. However, in the example selection process, active learning usually has to consider all the unlabeled data without exploiting the structural connectivity among them. This decreases classification accuracy to some extent, since the selected points may not be the most informative. To overcome this shortcoming, we present a method that applies the manifold-preserving graph reduction (MPGR) algorithm to the traditional active learning method of Gaussian processes. MPGR is a simple and efficient example sparsification algorithm that constructs a subset representing the global structure while eliminating the influence of noisy points and outliers. Thus, when actively selecting examples to label, we choose only from the subset constructed by MPGR instead of from all the unlabeled data. We report experimental results on multiple data sets demonstrating that our method obtains better classification performance than the original active learning method of Gaussian processes.
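The role MPGR plays can be illustrated with a heavily simplified connectivity-greedy sketch (my own toy proxy assuming a Gaussian-weighted graph; the paper's actual MPGR procedure and its Gaussian-process learner are not reproduced): points with high total edge weight to the rest of the pool survive, so dense manifold regions are kept and outliers are dropped before any active querying happens.

```python
import numpy as np

rng = np.random.default_rng(3)

# Unlabeled pool: two noisy clusters plus a few far-away outliers.
pool = np.vstack([
    rng.normal(-1.0, 0.3, (50, 2)),
    rng.normal(+1.0, 0.3, (50, 2)),
    rng.normal(0.0, 5.0, (5, 2)),        # outliers, indices 100..104
])

def sparsify(X, m, sigma=0.5):
    """Greedily keep the m points with highest Gaussian-graph connectivity."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    Wg = np.exp(-D2 / (2 * sigma ** 2))
    np.fill_diagonal(Wg, 0.0)
    remaining = list(range(len(X)))
    chosen = []
    for _ in range(m):
        conn = Wg[np.ix_(remaining, remaining)].sum(axis=1)
        k = remaining[int(conn.argmax())]
        chosen.append(k)
        remaining.remove(k)
    return np.array(chosen)

subset = sparsify(pool, m=20)
# Active learning would now query labels only inside `subset`;
# the isolated outliers should essentially never be selected.
n_outliers_kept = int((subset >= 100).sum())
```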

12.
13.
In this paper, a novel unsupervised dimensionality reduction algorithm, unsupervised Globality-Locality Preserving Projections in Transfer Learning (UGLPTL), is proposed. It builds on the conventional Globality-Locality Preserving dimensionality reduction algorithm (GLPP), which does not work well in real-world transfer learning (TL) applications. In TL applications, one application (the source domain) contains sufficient labeled data, while a related application (the target domain) contains only unlabeled data. Compared to existing TL methods, the proposed method incorporates all the objectives essential for transfer learning: minimizing the marginal and conditional distribution discrepancies between the two domains, maximizing the variance of the target domain, and performing geometrical diffusion on manifolds. UGLPTL seeks a projection that maps the source- and target-domain data into a common subspace, where both the labeled source data and the unlabeled target data can be used to perform dimensionality reduction. Comprehensive experiments have verified that the proposed method outperforms many state-of-the-art non-transfer and transfer learning methods on two popular real-world cross-domain visual transfer learning data sets. UGLPTL achieved mean accuracies of 82.18% and 87.14% over all tasks of the PIE Face and Office-Caltech data sets, respectively.

14.

In many practical data mining scenarios, such as network intrusion detection, Twitter spam detection, and computer-aided diagnosis, source domains that are distributed differently from, but related to, the target domain are commonly available. In general, both the source and target domains contain large numbers of unlabeled samples, and labeling every one of them is difficult, expensive, time-consuming, and sometimes unnecessary. It is therefore important and meaningful to fully exploit the labeled and unlabeled samples in both domains to solve the classification task in the target domain. Combining inductive transfer learning with semi-supervised learning, this paper proposes a semi-supervised inductive transfer learning framework named Co-Transfer. Co-Transfer first generates three TrAdaBoost classifiers for transfer from the original source domain to the original target domain, and another three TrAdaBoost classifiers for transfer from the original target domain to the original source domain. Both groups of classifiers are trained on bootstrap samples (drawn with replacement) of the originally labeled samples from the source and target domains. In each iteration of Co-Transfer, each group of TrAdaBoost classifiers is updated with a new training set, part of which consists of the original labeled samples, part of samples labeled by the group itself, and part of samples labeled by the other group. After the iterations terminate, the ensemble of the three TrAdaBoost classifiers trained from the original source domain to the original target domain serves as the target-domain classifier. Experimental results on UCI and text-classification datasets show that Co-Transfer can effectively exploit the labeled and unlabeled samples of the source and target domains to improve generalization performance.
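The weight update at the heart of each TrAdaBoost classifier can be sketched in a few lines (a minimal reconstruction of the standard TrAdaBoost rule with hypothetical toy inputs; Co-Transfer's co-labeling loop is not shown): misclassified target samples are up-weighted as in AdaBoost, while misclassified source samples are down-weighted so that source examples inconsistent with the target distribution gradually lose influence.

```python
import numpy as np

def tradaboost_reweight(w_src, w_tgt, err_src, err_tgt, eps_t, n_iter):
    """One TrAdaBoost reweighting step.

    err_src / err_tgt: boolean arrays, True where the weak learner erred.
    eps_t: weighted error of the weak learner on the target data.
    """
    n = len(w_src)
    beta_src = 1.0 / (1.0 + np.sqrt(2.0 * np.log(n) / n_iter))  # fixed rate
    beta_tgt = eps_t / (1.0 - eps_t)                            # AdaBoost rate
    w_src = w_src * beta_src ** err_src                 # shrink bad source weights
    w_tgt = w_tgt * beta_tgt ** (-err_tgt.astype(float))  # grow bad target weights
    total = w_src.sum() + w_tgt.sum()                   # renormalize jointly
    return w_src / total, w_tgt / total

# Toy call: one misclassified sample in each domain.
w_src = np.full(4, 0.125)
w_tgt = np.full(4, 0.125)
err_src = np.array([True, False, False, False])
err_tgt = np.array([True, False, False, False])
w_src2, w_tgt2 = tradaboost_reweight(w_src, w_tgt, err_src, err_tgt,
                                     eps_t=0.25, n_iter=10)
```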

15.
Principal component analysis (PCA) is one of the most widely used unsupervised dimensionality reduction methods in pattern recognition. It preserves the global covariance structure of the data when labels are not available. In many practical applications, however, besides the large amount of unlabeled data, it is also possible to obtain partial supervision, such as a few labeled data points and pairwise constraints, which carry far more discriminative information than unlabeled data. Unfortunately, PCA cannot utilize this discriminant information effectively. On the other hand, traditional supervised dimensionality reduction methods such as linear discriminant analysis operate on labeled data only, and their performance deteriorates when labeled data are insufficient. In this paper, we propose a novel discriminant PCA (DPCA) model to boost the discriminant power of PCA when unlabeled data, labeled data, and pairwise constraints are all available. The derived DPCA algorithm is efficient and has a closed-form solution. Experimental results on several UCI and face data sets show that DPCA is superior to several established dimensionality reduction methods.
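For contrast with DPCA, plain PCA reduces to an eigendecomposition of the sample covariance (a generic sketch on toy correlated data; DPCA's label and constraint terms are not included):

```python
import numpy as np

rng = np.random.default_rng(4)

# Correlated 2-D data: most variance lies along the y = x direction.
z = rng.normal(0, 1, 300)
X = np.column_stack([z + rng.normal(0, 0.2, 300),
                     z + rng.normal(0, 0.2, 300)])

Xc = X - X.mean(axis=0)                # center the data
C = Xc.T @ Xc / (len(X) - 1)           # sample covariance matrix
vals, vecs = np.linalg.eigh(C)         # eigenvalues in ascending order
pc1 = vecs[:, -1]                      # leading principal component
explained = float(vals[-1] / vals.sum())  # variance ratio captured by pc1
```

DPCA augments exactly this objective with discriminant terms from the labeled points and pairwise constraints, while keeping a closed-form eigen-solution.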

16.
Much data in practical applications, such as images and videos, is naturally tensor-structured and high-dimensional, so dimensionality reduction of tensor data has become a recent research focus. Most existing tensor dimensionality reduction methods are supervised and cannot effectively exploit the information in unlabeled samples. A harmonic-function-based method for tensor dimensionality reduction combines the advantages of traditional semi-supervised methods and tensor methods: it effectively exploits unlabeled samples while preserving the natural tensor structure of the data. Results on both simulated and real data verify its effectiveness.

17.
Learning to rank, the task of learning ranking functions that sort a set of entities using machine learning techniques, has recently attracted much interest in information retrieval and machine learning research. However, most existing work follows a supervised learning paradigm. In this paper, we propose a transductive method that extracts pairwise preference information from the unlabeled test data. We then design a loss function that combines this preference data with the labeled training data, and learn ranking functions by optimizing the loss function within a derived Ranking SVM framework. Experimental results on the LETOR 2.0 benchmark data collections show that our transductive method significantly outperforms the state-of-the-art supervised baseline.

18.
Multi-task learning (MTL) improves model performance by transferring and exploiting knowledge shared among tasks. Existing MTL work mainly focuses on the scenario where the label sets of the multiple tasks (MTs) are the same, so that they can be used for learning across the tasks. The real world, however, presents more general scenarios in which each task has only a small number of training samples and the label sets only partially overlap, or do not overlap at all. Learning such MTs is more challenging because less correlation information is available among the tasks. To address this, we propose a framework that learns these tasks by jointly leveraging abundant information from a learned auxiliary big task, whose sufficiently many classes cover those of all the tasks, together with the information shared among the partially overlapping tasks. In our implementation, which uses the same neural network architecture as the learned auxiliary task to learn the individual tasks, the key idea is to use the available label information to adaptively prune the hidden-layer neurons of the auxiliary network, constructing a corresponding network for each task while performing joint learning across the individual tasks. Extensive experimental results demonstrate that our proposed method is highly competitive with state-of-the-art methods.
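The adaptive-pruning idea can be caricatured in a few lines (an illustrative sketch with made-up shapes and a naive activation-based score of my own; the paper's actual pruning criterion and joint training are not reproduced): each small task keeps only the auxiliary network's hidden neurons that respond most strongly to its own few samples.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hidden weights of a (hypothetical) auxiliary network trained on a big task.
W_aux = rng.normal(0, 1, (8, 32))       # 8 inputs -> 32 hidden neurons

def prune_for_task(X_task, W, keep=8):
    """Keep the `keep` hidden neurons most active on this task's samples."""
    H = np.maximum(X_task @ W, 0.0)     # ReLU activations on the task data
    score = H.mean(axis=0)              # average activation per neuron
    mask = np.argsort(-score)[:keep]    # indices of the most active neurons
    return W[:, mask], mask

X_small = rng.normal(0, 1, (5, 8))      # a task with only 5 training samples
W_task, kept = prune_for_task(X_small, W_aux, keep=8)
```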

19.
Within the semi-supervised dimensionality reduction (SSDR) framework, this paper proposes a pairwise-constraint-based semi-supervised dimensionality reduction algorithm, SCSSDR. Graphs are constructed from pairwise samples so that the local structure is preserved while the global structure of the data is also taken into account. By optimizing the objective function, same-class samples become more compact and different-class samples more scattered. Quantitative analysis on UCI datasets shows that the method outperforms PCA and traditional manifold learning algorithms; further classification experiments on UCI and hyperspectral datasets show that the method is well suited to feature extraction for classification purposes.

20.
The goal of this work is to learn a parsimonious and informative representation for high-dimensional time series. Conceptually, this comprises two distinct yet tightly coupled tasks: learning a low-dimensional manifold and modeling the dynamical process. These two tasks have a complementary relationship as the temporal constraints provide valuable neighborhood information for dimensionality reduction and, conversely, the low-dimensional space allows dynamics to be learned efficiently. Solving these two tasks simultaneously allows important information to be exchanged mutually. If nonlinear models are required to capture the rich complexity of time series, then the learning problem becomes harder as the nonlinearities in both tasks are coupled. A divide, conquer, and coordinate method is proposed. The solution approximates the nonlinear manifold and dynamics using simple piecewise linear models. The interactions and coordinations among the linear models are captured in a graphical model. The model structure setup and parameter learning are done using a variational Bayesian approach, which enables automatic Bayesian model structure selection, hence solving the problem of overfitting. By exploiting the model structure, efficient inference and learning algorithms are obtained without oversimplifying the model of the underlying dynamical process. Evaluation of the proposed framework with competing approaches is conducted in three sets of experiments: dimensionality reduction and reconstruction using synthetic time series, video synthesis using a dynamic texture database, and human motion synthesis, classification, and tracking on a benchmark data set. In all experiments, the proposed approach provides superior performance.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号