Stochastic Algorithm with Optimal Convergence Rate for Strongly Convex Optimization Problems
Cite this article: SHAO Yan-Jian, TAO Qing, JIANG Ji-Yuan, ZHOU Bai. Stochastic Algorithm with Optimal Convergence Rate for Strongly Convex Optimization Problems[J]. Journal of Software, 2014, 25(9): 2160-2171.
Authors: SHAO Yan-Jian  TAO Qing  JIANG Ji-Yuan  ZHOU Bai
Affiliation: 11th Department, Army Officer Academy of PLA, Hefei 230031, China
Foundation item: National Natural Science Foundation of China (61273296)
Abstract: Stochastic gradient descent (SGD) is one of the effective methods for handling large-scale data. Under the strongly convex condition, the black-box SGD method achieves the optimal O(1/T) convergence rate, whereas structural optimization algorithms for solving L1+L2 regularized learning problems, such as COMID (composite objective mirror descent), only attain an O(lnT/T) rate. This paper proposes a weighted algorithm based on COMID that preserves sparsity, and proves that it not only achieves the O(1/T) convergence rate but also supports on-the-fly computation, thereby reducing the computational cost. Experimental results confirm the correctness of the theoretical analysis and the effectiveness of the proposed algorithm.

Keywords: machine learning  stochastic optimization  strongly convex problems  hybrid regularization  COMID (composite objective mirror descent)
Received: 2014/1/23
Revised: 4/9/2014

Stochastic Algorithm with Optimal Convergence Rate for Strongly Convex Optimization Problems
SHAO Yan-Jian, TAO Qing, JIANG Ji-Yuan and ZHOU Bai. Stochastic Algorithm with Optimal Convergence Rate for Strongly Convex Optimization Problems[J]. Journal of Software, 2014, 25(9): 2160-2171.
Authors: SHAO Yan-Jian  TAO Qing  JIANG Ji-Yuan and ZHOU Bai
Affiliation: 11th Department, Army Officer Academy of PLA, Hefei 230031, China
Abstract: Stochastic gradient descent (SGD) is one of the effective methods for dealing with large-scale data. Recent research shows that the black-box SGD method can reach an O(1/T) convergence rate for strongly convex problems. However, for solving the regularized problem with L1 plus L2 terms, the convergence rate of structural optimization methods such as COMID (composite objective mirror descent) can only attain O(lnT/T). In this paper, a weighted algorithm based on COMID is presented to keep the sparsity imposed by the L1 regularization term. A proof is provided to show that it achieves an O(1/T) convergence rate. Furthermore, the proposed scheme takes advantage of on-the-fly computation so that the computational cost is reduced. The experimental results demonstrate the correctness of the theoretical analysis and the effectiveness of the proposed algorithm.
Keywords:machine learning  stochastic optimization  strongly-convex  hybrid regularization  COMID (composite objective mirror descent)
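
The following is a minimal Python sketch, not the paper's exact algorithm, of the kind of update the abstract describes: a COMID-style stochastic proximal step for an L1+L2 (elastic-net) regularized squared loss, combined with a polynomially weighted average of the iterates maintained on the fly. The squared loss, the step size eta_t = 1/(lam2*t), and the averaging weight rho_t = 2/(t+1) are illustrative assumptions; the paper's precise weighting scheme and constants may differ.

# A minimal sketch, assuming a squared loss and illustrative step-size and
# weighting choices: a COMID-style stochastic proximal step for an L1 + L2
# (elastic-net) regularized objective, with a polynomially weighted average
# of the iterates maintained on the fly.
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (component-wise soft thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def weighted_comid(X, y, lam1=0.01, lam2=0.1, T=10000, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)        # current iterate
    w_avg = np.zeros(d)    # weighted average, updated incrementally
    for t in range(1, T + 1):
        i = rng.integers(n)                  # sample one training example
        g = (X[i] @ w - y[i]) * X[i]         # stochastic gradient of the squared loss
        eta = 1.0 / (lam2 * t)               # step size for a lam2-strongly convex objective
        # Euclidean COMID step: the L1 and L2 terms stay in the composite part,
        # so the update is soft thresholding followed by shrinkage, which keeps
        # the iterate sparse.
        w = soft_threshold(w - eta * g, eta * lam1) / (1.0 + eta * lam2)
        # Weighted averaging with weights proportional to t, maintained on the
        # fly so no history of past iterates has to be stored.
        rho = 2.0 / (t + 1)
        w_avg = (1.0 - rho) * w_avg + rho * w
    return w, w_avg

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    d, n = 50, 2000
    w_true = np.zeros(d)
    w_true[:5] = 1.0                          # sparse ground truth
    X = rng.standard_normal((n, d))
    y = X @ w_true + 0.1 * rng.standard_normal(n)
    w_last, w_bar = weighted_comid(X, y)
    print("nonzeros in last iterate:    ", int(np.count_nonzero(np.abs(w_last) > 1e-8)))
    print("nonzeros in weighted average:", int(np.count_nonzero(np.abs(w_bar) > 1e-8)))

The incremental update of w_avg is the sense of on-the-fly computation mentioned in the abstract: the weighted average over all iterates is available at any time without storing past iterates.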