Similar Documents
1.
Multitask Learning  (cited 10 times in total: 0 self-citations, 10 by others)
Caruana, Rich. Machine Learning, 1997, 28(1): 41-75.
Multitask Learning is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning tasks in parallel while using a shared representation; what is learned for each task can help other tasks be learned better. This paper reviews prior work on MTL, presents new evidence that MTL in backprop nets discovers task relatedness without the need for supervisory signals, and presents new results for MTL with k-nearest neighbor and kernel regression. In this paper we demonstrate multitask learning in three domains. We explain how multitask learning works, and show that there are many opportunities for multitask learning in real domains. We present an algorithm and results for multitask learning with case-based methods like k-nearest neighbor and kernel regression, and sketch an algorithm for multitask learning in decision trees. Because multitask learning works, can be applied to many different kinds of domains, and can be used with different learning algorithms, we conjecture there will be many opportunities for its use on real-world problems.
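Caruana's shared-representation setup maps naturally onto a modern backprop net. Below is a minimal, hedged sketch of hard parameter sharing (layer sizes, task count, and losses are illustrative choices of ours, not from the paper): one shared hidden layer is trained in parallel on all tasks' signals, with one output head per task.

```python
# Minimal sketch of hard parameter sharing for MTL (illustrative sizes).
import torch
import torch.nn as nn

class SharedMTLNet(nn.Module):
    def __init__(self, n_features, n_tasks, hidden=64):
        super().__init__()
        # Shared representation: what is learned here for one task
        # acts as an inductive bias for the others.
        self.shared = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        # One small output head per task.
        self.heads = nn.ModuleList([nn.Linear(hidden, 1) for _ in range(n_tasks)])

    def forward(self, x):
        h = self.shared(x)
        return [head(h) for head in self.heads]

# Training sums the per-task losses, so every task's training signal
# updates the shared layer in parallel.
model = SharedMTLNet(n_features=10, n_tasks=3)
x = torch.randn(32, 10)
targets = [torch.randn(32, 1) for _ in range(3)]
loss = sum(nn.functional.mse_loss(out, t) for out, t in zip(model(x), targets))
loss.backward()
```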

2.
Image recognition is a core problem in image research, and solving it matters for face recognition, autonomous driving, robotics, and many other fields. The machine learning methods now in wide use, based on deep neural networks, have surpassed human-level performance on image recognition datasets such as bird classification, face recognition, and everyday-object classification, and a growing number of industrial applications are adopting deep-neural-network methods for image recognition tasks. However, deep learning…

3.
The approach of learning multiple "related" tasks simultaneously has proven quite successful in practice; however, theoretical justification for this success has remained elusive. The starting point for previous work on multiple task learning has been that the tasks to be learned jointly are somehow "algorithmically related", in the sense that the results of applying a specific learning algorithm to these tasks are assumed to be similar. We offer an alternative approach, defining relatedness of tasks on the basis of similarity between the example generating distributions that underlie these tasks. We provide a formal framework for this notion of task relatedness, which captures a sub-domain of the wide scope of issues in which one may apply a multiple task learning approach. Our notion of task similarity is relevant to a variety of real life multitask learning scenarios and allows the formal derivation of generalization bounds that are strictly stronger than the previously known bounds for both the learning-to-learn and the multitask learning scenarios. We give precise conditions under which our bounds guarantee generalization on the basis of smaller sample sizes than the standard single-task approach. Editors: Daniel Silver, Kristin Bennett, Richard Caruana. A preliminary version of this paper appears in the proceedings of COLT'03 (Ben-David and Schuller 2003).
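For orientation only, multitask generalization bounds typically take the following Baxter-style shape. This is an illustrative form under assumed notation (T tasks with n examples each; C and c are complexity terms for the shared and per-task parts of the hypothesis space), not the strictly stronger bound this paper derives:

```latex
% Illustrative shape only: sharing across T tasks divides the
% shared-complexity cost C by T, which is why joint learning can
% need fewer samples per task than single-task learning.
\[
  \operatorname{er}(h_t) \;\le\; \widehat{\operatorname{er}}(h_t)
  \;+\; O\!\left(\sqrt{\frac{C/T + c + \log(1/\delta)}{n}}\right),
  \qquad t = 1,\dots,T
\]
```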

4.
With the rapid development of information technology, information networks are everywhere: social networks, academic networks, the World Wide Web, and more. The ever-growing scale of these networks and the sparsity of their data pose major challenges for network analysis methods. As an effective response to both challenges, information network representation learning aims to use the network's topology, node content, and other information to embed nodes into a low-dimensional vector space while preserving the structural and content features inherent in the original network, thereby…
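One classic instance of this embedding idea, offered as an illustration rather than as this survey's method, is DeepWalk-style random-walk embedding: sample walks over the graph and feed them to skip-gram, so that nodes sharing neighborhoods get nearby vectors. The graph and hyperparameters below are stand-ins.

```python
# DeepWalk-style sketch: random walks + skip-gram node embeddings.
import random
import networkx as nx
from gensim.models import Word2Vec

G = nx.karate_club_graph()  # stand-in graph; any nx.Graph works

def random_walk(g, start, length=10):
    walk = [start]
    while len(walk) < length:
        nbrs = list(g.neighbors(walk[-1]))
        if not nbrs:
            break
        walk.append(random.choice(nbrs))
    return [str(n) for n in walk]  # Word2Vec expects string tokens

walks = [random_walk(G, n) for n in G.nodes() for _ in range(10)]
# Skip-gram over walks: nodes that co-occur on walks get nearby vectors,
# preserving neighborhood structure in a low-dimensional space.
model = Word2Vec(walks, vector_size=32, window=5, min_count=0, sg=1)
emb = model.wv[str(0)]  # 32-dimensional embedding of node 0
```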

5.
In real applications of inductive learning for classification, labeled instances are often deficient, and labeling them by an oracle is often expensive and time-consuming. Active learning on a single task aims to select only informative unlabeled instances for querying, improving classification accuracy while decreasing the querying cost. However, an inevitable problem in active learning is that the informative measures used to select queries are commonly based on initial hypotheses sampled from only a few labeled instances. In such a circumstance, the initial hypotheses are not reliable and may deviate from the true distribution underlying the target task; consequently, the informative measures may select irrelevant instances. A promising way to compensate for this problem is to borrow useful knowledge from other sources with abundant labeled information, which is called transfer learning. A significant challenge in transfer learning, however, is how to measure the similarity between the source and the target tasks. One needs to be aware of differing distributions or label assignments from unrelated source tasks; otherwise, they will degrade performance during transfer. How to design an effective strategy that avoids selecting irrelevant samples to query also remains an open question. To tackle these issues, we propose a hybrid algorithm for active learning aided by transfer learning, adopting a divergence measure to alleviate the negative transfer caused by distribution differences. To avoid querying irrelevant instances, we also present an adaptive strategy that eliminates unnecessary instances in the input space and unnecessary models in the model space. Extensive experiments on both synthetic and real data sets show that the proposed algorithm queries fewer instances at a higher accuracy and converges faster than state-of-the-art methods.
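As a rough illustration of the two ingredients the abstract describes, a divergence measure to temper negative transfer and a selection rule for queries, here is a hedged Python sketch. The proxy A-distance, the domain-classifier trick, the uncertainty-sampling rule, and all data are stand-ins of ours, not the authors' algorithm.

```python
# Sketch: divergence-tempered transfer + uncertainty-based query selection.
import numpy as np
from sklearn.linear_model import LogisticRegression

def proxy_a_distance(Xs, Xt):
    """Estimate source/target divergence: train a classifier to tell the
    domains apart; 2 * (1 - 2 * err) is the proxy A-distance."""
    X = np.vstack([Xs, Xt])
    d = np.r_[np.zeros(len(Xs)), np.ones(len(Xt))]
    clf = LogisticRegression(max_iter=1000).fit(X, d)
    err = 1.0 - clf.score(X, d)
    return 2.0 * (1.0 - 2.0 * err)

def select_queries(clf, X_pool, k=5):
    """Uncertainty sampling: query the k pool instances whose predicted
    class probabilities are closest to 0.5."""
    p = clf.predict_proba(X_pool)[:, 1]
    return np.argsort(np.abs(p - 0.5))[:k]

# Synthetic usage: down-weight the source's influence as divergence grows.
rng = np.random.default_rng(0)
Xs, ys = rng.normal(0, 1, (200, 5)), rng.integers(0, 2, 200)
Xt = rng.normal(0.5, 1, (200, 5))
w = max(0.0, 1.0 - proxy_a_distance(Xs, Xt) / 2.0)  # transfer weight
clf = LogisticRegression(max_iter=1000).fit(Xs, ys)  # initial hypothesis
queries = select_queries(clf, Xt, k=5)
```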

6.
Deep reinforcement learning has made breakthrough progress on single-agent tasks, but the complexity of multi-agent systems puts their main difficulties beyond ordinary algorithms. Moreover, as the number of agents grows, taking the expected cumulative return of each individual agent as the objective to maximize often fails to converge, and some of the special points it does converge to do not correspond to reasonable policies. For practical problems that have no optimal solution, reinforcement learning alone is helpless. Introducing game theory into reinforcement learning captures the relationships among agents, explains why the policies at convergence points are reasonable, and, most importantly, lets equilibrium solutions stand in for optimal solutions so that relatively effective policies can still be obtained. This paper therefore surveys recent reinforcement learning algorithms from a game-theoretic perspective, summarizes the key difficulties of current game-theoretic reinforcement learning algorithms, and points out several directions that may overcome them.
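To make "equilibrium instead of optimum" concrete, here is a minimal sketch in which the game and the algorithm are textbook illustrations, not taken from the survey: fictitious play on a two-player zero-sum matrix game, whose empirical play frequencies converge to a Nash equilibrium.

```python
# Fictitious play on rock-paper-scissors: each player best-responds to
# the opponent's empirical mixture; frequencies converge to equilibrium.
import numpy as np

A = np.array([[0.0, -1.0,  1.0],   # row player's payoffs (rock)
              [1.0,  0.0, -1.0],   # (paper)
              [-1.0, 1.0,  0.0]])  # (scissors); column player gets -A

counts_row = np.ones(3)  # how often each pure strategy was played
counts_col = np.ones(3)
for _ in range(20000):
    br_row = np.argmax(A @ (counts_col / counts_col.sum()))  # maximize payoff
    br_col = np.argmin((counts_row / counts_row.sum()) @ A)  # minimize payoff
    counts_row[br_row] += 1
    counts_col[br_col] += 1

print(counts_row / counts_row.sum())  # ~[1/3, 1/3, 1/3]: the Nash equilibrium
```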

7.
Transfer in variable-reward hierarchical reinforcement learning  (cited 2 times in total: 1 self-citation, 1 by others)
Transfer learning seeks to leverage previously learned tasks to achieve faster learning in a new task. In this paper, we consider transfer learning in the context of related but distinct Reinforcement Learning (RL) problems. In particular, our RL problems are derived from Semi-Markov Decision Processes (SMDPs) that share the same transition dynamics but have different reward functions that are linear in a set of reward features. We formally define the transfer learning problem in the context of RL as learning an efficient algorithm to solve any SMDP drawn from a fixed distribution after experiencing a finite number of them. Furthermore, we introduce an online algorithm to solve this problem, Variable-Reward Reinforcement Learning (VRRL), that compactly stores the optimal value functions for several SMDPs, and uses them to optimally initialize the value function for a new SMDP. We generalize our method to a hierarchical RL setting where the different SMDPs share the same task hierarchy. Our experimental results in a simplified real-time strategy domain show that significant transfer learning occurs in both flat and hierarchical settings. Transfer is especially effective in the hierarchical setting where the overall value functions are decomposed into subtask value functions which are more widely amenable to transfer across different SMDPs.
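The linear-reward structure is what makes the compact storage work: a policy's value under any reward weights can be recovered from its stored per-state feature totals. A minimal sketch follows, with notation and numbers ours rather than the authors' code.

```python
# Variable-reward transfer sketch: Psi[i][s] holds the expected discounted
# reward-feature totals of stored policy i at state s (made-up numbers
# standing in for values learned on previous SMDPs). Because rewards are
# linear in the features, policy i's value under new weights w is
# w . Psi[i][s], so the new SMDP's value function can be initialized
# with the best stored policy per state.
import numpy as np

Psi = np.array([
    [[1.0, 0.2, 0.0], [0.5, 0.5, 0.1], [0.0, 1.0, 0.3], [0.2, 0.2, 0.2]],
    [[0.1, 0.9, 0.4], [0.7, 0.1, 0.6], [0.3, 0.3, 0.9], [0.9, 0.0, 0.1]],
])  # shape: (n_policies, n_states, n_features)

w_new = np.array([0.2, 0.5, 0.3])  # reward weights of the new SMDP

# Initialization: per state, the value of the best stored policy.
V_init = (Psi @ w_new).max(axis=0)  # shape: (n_states,)
print(V_init)
```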

8.
Objective: Existing image recognition methods perform well when training and test data are drawn from the same distribution, but that assumption fails in real scenarios, degrading recognition accuracy. Domain adaptation methods, which handle data from two related domains with different distributions, are an effective way to solve such problems. Method: Starting from an analysis of the data distributions, we propose a joint balanced adaptation method based on attention transfer, which transfers image features extracted from labeled source-domain data to the unlabeled target domain. First, an attention-transfer mechanism carries the spatial category information of the labeled source data over to the unlabeled target domain; by defining the attention of a convolutional neural network, the attended information is used to improve recognition accuracy. Second, a prior distribution over network parameters is introduced based on the target dataset, giving the network the ability to automatically adjust the degree of feature alignment in each domain-alignment layer. Finally, cross-domain deviation describes the input distribution of each domain-specific alignment layer, quantifying how much domain adaptation each layer has learned. Results: The method achieves an average recognition accuracy of 77.6% on the Office-31 dataset and 90.7% on the Office-Caltech dataset, far ahead of traditional hand-crafted-feature methods and on par with the current best methods. Conclusion: The joint balanced domain adaptation method with attention transfer not only achieves high recognition accuracy but also automatically learns the degree of feature alignment between domains, confirming that transferring features across domains improves network optimization.
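The abstract does not spell out its attention definition, so the sketch below follows the common activation-based recipe (Zagoruyko and Komodakis), offered as an assumed stand-in for the paper's mechanism: the attention map of a convolutional layer is the channel-wise sum of squared activations, and the transfer loss matches normalized maps between two networks.

```python
# Activation-based attention transfer sketch (standard recipe, assumed).
import torch
import torch.nn.functional as F

def attention_map(feat):
    """feat: (N, C, H, W) conv activations -> (N, H*W) normalized map."""
    a = feat.pow(2).sum(dim=1).flatten(1)  # sum of squares over channels
    return F.normalize(a, dim=1)           # L2-normalize per sample

def attention_transfer_loss(feat_src, feat_tgt):
    """Mean squared L2 distance between normalized attention maps."""
    diff = attention_map(feat_src) - attention_map(feat_tgt)
    return diff.pow(2).sum(dim=1).mean()

# Usage with made-up activations of matching spatial size:
f_src = torch.randn(8, 64, 14, 14)
f_tgt = torch.randn(8, 64, 14, 14)
loss = attention_transfer_loss(f_src, f_tgt)
```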

9.
Multi-task learning improves model performance by transferring and exploiting knowledge common to several tasks. Existing MTL work focuses mainly on the scenario where the label sets of the multiple tasks (MTs) are the same, so that the labels can be used for learning across tasks. The real world, however, presents more general scenarios in which each task has only a small number of training samples and the label sets only partially overlap, or do not overlap at all. Learning such MTs is more challenging because less correlation information is available among the tasks. To address this, we propose a framework that learns these tasks by jointly leveraging both the abundant information from a learnt auxiliary big task, whose sufficiently many classes cover those of all the tasks, and the information shared among the partially-overlapped tasks. In our implementation, which uses the same neural network architecture as the learnt auxiliary task to learn the individual tasks, the key idea is to use the available label information to adaptively prune the hidden-layer neurons of the auxiliary network into a corresponding network for each task, accompanied by joint learning across the individual tasks. Extensive experimental results demonstrate that our proposed method is highly competitive with state-of-the-art methods.
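A hedged sketch of the pruning idea follows; every name and the mask rule are hypothetical, since the abstract does not give the pruning criterion. Each task reuses the auxiliary network's trunk but masks out hidden neurons deemed irrelevant to its own label set.

```python
# Sketch: per-task pruning of a shared auxiliary trunk via a 0/1 mask.
import torch
import torch.nn as nn

class PrunedHead(nn.Module):
    def __init__(self, shared, hidden_dim, n_classes, keep_mask):
        super().__init__()
        self.shared = shared                     # auxiliary network trunk
        self.register_buffer("mask", keep_mask)  # 1 = keep neuron, 0 = prune
        self.head = nn.Linear(hidden_dim, n_classes)

    def forward(self, x):
        h = self.shared(x) * self.mask           # zero out pruned neurons
        return self.head(h)

shared = nn.Sequential(nn.Linear(20, 64), nn.ReLU())  # learnt on the big task
# In the paper, label information would choose the mask; here it is a
# random placeholder for illustration.
mask = (torch.rand(64) > 0.5).float()
task_net = PrunedHead(shared, 64, n_classes=3, keep_mask=mask)
out = task_net(torch.randn(4, 20))
```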

10.
In industrial practice, a new task is usually related to existing ones. Transfer learning can carry useful information from existing datasets over to the new task, avoiding much of the time and expense of building models from scratch. However, because tasks differ in distribution, avoiding negative transfer in heterogeneous settings remains an unsolved problem. One must measure not only the similarity between datasets but also the relevance of individual instances, whereas most traditional methods operate on only one of these levels. We propose a transfer learning method based on compressed coding (TLCC) that models both levels: at the data level, similarity between datasets is expressed as the coding length of a hyperplane classifier; at the instance level, valuable instances are further selected for transfer, improving performance and preventing negative transfer. Experimental results show that the proposed algorithm clearly outperforms the alternatives and retains high accuracy in noisy environments.
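TLCC's exact coding scheme is not given in the abstract, so the sketch below uses an assumed, standard MDL reading: the coding length of a labeled dataset under a hyperplane classifier is the negative log-likelihood of the labels in bits, and domain similarity is the extra cost of coding the two domains with one shared hyperplane rather than two separate ones.

```python
# Assumed MDL-style reading of "similarity as coding length".
import numpy as np
from sklearn.linear_model import LogisticRegression

def coding_length_bits(X, y):
    """Bits needed to encode labels y given X under a fitted hyperplane."""
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    p = clf.predict_proba(X)[np.arange(len(y)), y]
    return -np.log2(np.clip(p, 1e-12, 1)).sum()

def similarity_gap(Xs, ys, Xt, yt):
    """If one shared hyperplane codes both domains nearly as cheaply as
    two separate ones, the domains are similar (small positive gap)."""
    joint = coding_length_bits(np.vstack([Xs, Xt]), np.r_[ys, yt])
    separate = coding_length_bits(Xs, ys) + coding_length_bits(Xt, yt)
    return joint - separate  # extra bits paid for sharing

rng = np.random.default_rng(0)
Xs, ys = rng.normal(0.0, 1, (100, 4)), rng.integers(0, 2, 100)
Xt, yt = rng.normal(0.1, 1, (100, 4)), rng.integers(0, 2, 100)
print(similarity_gap(Xs, ys, Xt, yt))
```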
