Feature-aware regularization for sparse online learning
Authors: Hidekazu Oiwa, Shin Matsushima, Hiroshi Nakagawa
Affiliations: 1. Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, 113-8654, Japan
2. Information Technology Center, The University of Tokyo, Tokyo, 113-8654, Japan
Abstract: Learning a compact predictive model in an online setting has recently attracted a great deal of attention. Combining online learning with sparsity-inducing regularization enables faster learning with a smaller memory footprint than previous learning frameworks, and many optimization methods and learning algorithms have been developed on the basis of online learning with L1-regularization. However, L1-regularization tends to truncate certain kinds of parameters, such as those of features that occur rarely or take values in a small range, unless those features are emphasized in advance; adding such a pre-processing step would forfeit the advantages of online learning. We propose a new regularization framework for sparse online learning. We focus on the regularization term and enhance the state-of-the-art regularization approach by integrating information on all previous subgradients of the loss function into the regularization term. The resulting algorithms let online learning adjust the truncation intensity of each feature without pre-processing and eventually eliminate the bias of L1-regularization. We establish theoretical properties of our framework, including its computational complexity and an upper bound on the regret. Experiments demonstrate that our algorithms outperform previous methods on many classification tasks.
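
To make the mechanism concrete, below is a minimal Python sketch of a feature-aware L1 update in the FOBOS (forward-backward splitting) style: after each gradient step, every coordinate is soft-thresholded by an amount scaled by that feature's accumulated absolute subgradients, so rarely occurring features are truncated less aggressively. The class name, the hinge-loss choice, and the exact normalization of the per-feature threshold are illustrative assumptions for this sketch, not the authors' published algorithm.

import numpy as np

def soft_threshold(w, tau):
    # Coordinate-wise soft-thresholding: sign(w) * max(|w| - tau, 0).
    return np.sign(w) * np.maximum(np.abs(w) - tau, 0.0)

class FeatureAwareL1SGD:
    # Sketch of a FOBOS-style online learner with a per-feature L1 penalty.
    # The per-coordinate truncation threshold grows with that feature's
    # accumulated absolute subgradients (an assumed weighting scheme), so
    # features that rarely receive updates are truncated less.

    def __init__(self, dim, eta=0.1, lam=0.01, eps=1e-8):
        self.w = np.zeros(dim)      # weight vector
        self.g_acc = np.zeros(dim)  # accumulated |subgradient| per feature
        self.eta = eta              # learning rate
        self.lam = lam              # base regularization strength
        self.eps = eps              # avoids division by zero

    def update(self, x, y):
        # Hinge-loss subgradient for a linear classifier, y in {-1, +1}.
        margin = y * self.w.dot(x)
        g = -y * x if margin < 1.0 else np.zeros_like(x)
        self.g_acc += np.abs(g)

        # Forward step: gradient descent on the loss.
        w_half = self.w - self.eta * g

        # Backward step: feature-aware truncation. Each coordinate's
        # threshold is proportional to its share of the accumulated
        # subgradient mass, so frequent features are penalized more.
        tau = self.eta * self.lam * self.g_acc / (self.g_acc.sum() + self.eps)
        self.w = soft_threshold(w_half, tau)

Under this weighting, a feature that has never fired accumulates no penalty mass and keeps its weight intact, which addresses the bias the abstract attributes to uniform L1-regularization: a constant threshold applied at every round eventually erases rare features regardless of their predictive value.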
Keywords: online learning; supervised learning; sparsity-inducing regularization; feature selection; sentiment analysis
This article is indexed by CNKI, Weipu (维普), SpringerLink, and other databases.