Survey of Solving the Optimization Problems for Sparse Learning
Cite this article: TAO Qing, GAO Qian-Kun, JIANG Ji-Yuan, CHU De-Jun. Survey of Solving the Optimization Problems for Sparse Learning [J]. Journal of Software, 2013, 24(11): 2498-2507
Authors: TAO Qing  GAO Qian-Kun  JIANG Ji-Yuan  CHU De-Jun
Affiliation: 11th Department, Army Officer Academy of PLA, Hefei 230031, Anhui, China (all four authors)
Funding: National Natural Science Foundation of China (60975040, 61273296); Natural Science Foundation of Anhui Province (1308085QF121)
Abstract: Machine learning faces the serious challenge of ever-growing data scale; how to handle large-scale and even huge-scale data is a key scientific problem that statistical learning urgently needs to solve. The training sets of large-scale machine learning problems are often redundant and sparse, and the regularizer and loss function of a machine learning optimization problem also carry special structural meaning. Batch black-box methods that directly use the gradient of the whole objective function not only have difficulty handling large-scale problems but also cannot satisfy machine learning's requirements on structure. At present, coordinate, online, and stochastic optimization methods, which have developed rapidly driven by the characteristics of machine learning itself, have become effective means of solving large-scale problems. Focusing on the L1-regularized problem, this paper reviews some research advances in these large-scale algorithms.

Keywords: L1 regularization; online optimization; stochastic optimization; coordinate optimization
Received: 2013-04-30
Revised: 2013-08-02

Survey of Solving the Optimization Problems for Sparse Learning
TAO Qing, GAO Qian-Kun, JIANG Ji-Yuan, CHU De-Jun. Survey of Solving the Optimization Problems for Sparse Learning [J]. Journal of Software, 2013, 24(11): 2498-2507
Authors: TAO Qing  GAO Qian-Kun  JIANG Ji-Yuan  CHU De-Jun
Affiliation: 11th Department, Army Officer Academy of PLA, Hefei 230031, China (all four authors)
Abstract: Machine learning is facing a great challenge arising from the increasing scale of data. How to cope with large-scale and even huge-scale data is a key problem in the emerging area of statistical learning. The training sets of large-scale learning problems usually contain redundancy and sparsity, and the regularizer and loss function of a learning problem carry structural implications. Gradient-type black-box methods applied directly in the batch setting not only fail to handle large-scale problems but also cannot exploit the structural information implied by machine learning. Recently, scalable methods such as coordinate descent, online, and stochastic algorithms, which are driven by the characteristics of machine learning itself, have become the dominant paradigms for large-scale problems. This paper focuses on L1-regularized problems and reviews some significant advances in these scalable algorithms.
Keywords: L1 regularization; online optimization; stochastic optimization; coordinate optimization
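
To make the surveyed problem class concrete, the following is a minimal sketch (not taken from the paper) of one of the algorithm families it covers: a stochastic proximal-gradient method for an L1-regularized least-squares objective. The gradient of the loss is estimated from a single randomly chosen training example, and the L1 term is handled by its proximal operator (componentwise soft-thresholding), which is what keeps the iterates sparse. All function names, step sizes, and data below are illustrative assumptions.

import numpy as np

def soft_threshold(w, tau):
    # Proximal operator of tau * ||w||_1: componentwise soft-thresholding.
    return np.sign(w) * np.maximum(np.abs(w) - tau, 0.0)

def stochastic_proximal_gradient(X, y, lam=0.1, eta=0.01, n_iter=20000, seed=0):
    # Approximately minimize (1/2n) * ||X w - y||^2 + lam * ||w||_1
    # using single-sample stochastic gradient steps followed by the L1 prox.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        i = rng.integers(n)                              # draw one training example at random
        grad_i = (X[i] @ w - y[i]) * X[i]                # stochastic gradient of the squared loss
        w = soft_threshold(w - eta * grad_i, eta * lam)  # gradient step, then soft-threshold
    return w

if __name__ == "__main__":
    # Synthetic data with a sparse ground-truth weight vector.
    rng = np.random.default_rng(1)
    X = rng.standard_normal((500, 50))
    w_true = np.zeros(50)
    w_true[:5] = [1.0, -2.0, 0.5, 3.0, -1.5]
    y = X @ w_true + 0.01 * rng.standard_normal(500)
    w_hat = stochastic_proximal_gradient(X, y)
    print("non-zero coefficients recovered:", np.count_nonzero(np.abs(w_hat) > 1e-3))

A coordinate-descent variant of the same idea would instead update one component of w at a time with the same soft-thresholding rule, and the online setting would replace the fixed training set with a stream of incoming examples; the survey reviews these families for L1-regularized problems.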