Convex multi-task feature learning
Authors: Andreas Argyriou, Theodoros Evgeniou, Massimiliano Pontil
Affiliations: (1) Department of Computer Science, University College London, Gower Street, London WC1E 6BT, UK; (2) Technology Management and Decision Sciences, INSEAD, 77300 Fontainebleau, France
Abstract: We present a method for learning sparse representations shared across multiple tasks. This method is a generalization of the well-known single-task 1-norm regularization. It is based on a novel non-convex regularizer which controls the number of learned features common across the tasks. We prove that the method is equivalent to solving a convex optimization problem for which there is an iterative algorithm which converges to an optimal solution. The algorithm has a simple interpretation: it alternately performs a supervised and an unsupervised step, where in the former step it learns task-specific functions and in the latter step it learns common-across-tasks sparse representations for these functions. We also provide an extension of the algorithm which learns sparse nonlinear representations using kernels. We report experiments on simulated and real data sets which demonstrate that the proposed method can both improve the performance relative to learning each task independently and lead to a few learned features common across related tasks. Our algorithm can also be used, as a special case, to simply select, not learn, a few common variables across the tasks.
Editors: Daniel Silver, Kristin Bennett, Richard Caruana
Note: This is a longer version of the conference paper (Argyriou et al. in Advances in Neural Information Processing Systems, vol. 19, 2007a). It includes new theoretical and experimental results.
Keywords: Collaborative filtering, Inductive transfer, Kernels, Multi-task learning, Regularization, Transfer learning, Vector-valued functions
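
The alternating scheme described in the abstract can be sketched in code. The following is a minimal illustration, not the authors' implementation: it assumes a squared loss, the trace-one constraint on the feature matrix D from the paper's convex formulation, and an eps-perturbation to keep D invertible; the names multi_task_feature_learning, Xs, ys, gamma, and eps are chosen here for illustration.

import numpy as np

def multi_task_feature_learning(Xs, ys, gamma=1.0, eps=1e-6, n_iters=50):
    """Sketch of the alternating algorithm (assumed squared loss).

    Xs, ys: lists of per-task design matrices (m_t x d) and targets (m_t,).
    gamma:  regularization strength (free parameter in this sketch).
    eps:    perturbation keeping D positive definite.
    """
    d, T = Xs[0].shape[1], len(Xs)
    D = np.eye(d) / d                      # feasible start: trace(D) = 1
    W = np.zeros((d, T))
    for _ in range(n_iters):
        # Supervised step: per-task ridge regression in the metric induced by D,
        # i.e. minimize ||X_t w - y_t||^2 + gamma * w^T D^{-1} w for each task.
        D_inv = np.linalg.inv(D)
        for t in range(T):
            A = Xs[t].T @ Xs[t] + gamma * D_inv
            W[:, t] = np.linalg.solve(A, Xs[t].T @ ys[t])
        # Unsupervised step: closed-form update D = (W W^T)^{1/2} / trace(...),
        # with an eps * I perturbation so the matrix square root stays invertible.
        C = W @ W.T + eps * np.eye(d)
        evals, evecs = np.linalg.eigh(C)
        sqrtC = evecs @ np.diag(np.sqrt(np.clip(evals, 0.0, None))) @ evecs.T
        D = sqrtC / np.trace(sqrtC)
    return W, D

# Example usage on synthetic data: 3 related tasks sharing one latent direction.
rng = np.random.default_rng(0)
u = rng.standard_normal(20)
Xs = [rng.standard_normal((30, 20)) for _ in range(3)]
ys = [X @ u + 0.1 * rng.standard_normal(30) for X in Xs]
W, D = multi_task_feature_learning(Xs, ys, gamma=0.5)

Each pass solves T ridge regressions in the metric induced by D (the supervised step) and then recomputes D in closed form from W (the unsupervised step). The variable-selection special case mentioned in the abstract corresponds to constraining D to be diagonal.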