Individual Convergence of Stochastic Optimization Methods in Machine Learning
Tao Qing, Ma Po, Zhang Menghan, Tao Wei. Individual Convergence of Stochastic Optimization Methods in Machine Learning[J]. Journal of Data Acquisition & Processing, 2017, 32(1): 17-25.
Authors: Tao Qing, Ma Po, Zhang Menghan, Tao Wei
Affiliation: 1. 11th Department, Army Officer Academy of PLA, Hefei 230031, China; 2. College of Command Information Systems, PLA University of Science and Technology, Nanjing 210007, China
Abstract: Stochastic optimization is among the state-of-the-art methods for solving large-scale machine learning problems, where the central questions are whether an algorithm attains the optimal convergence rate and whether it preserves the structure of the learning problem. So far, many kinds of stochastic optimization algorithms have been presented for solving regularized loss problems. However, most of them establish convergence rates only for the averaged output, under which even the most typical structure, sparsity, cannot be preserved. In contrast, the individual solution keeps sparsity very well, and whether it attains the optimal convergence rate has been widely explored as an open problem. On the other hand, the commonly used assumption of unbiased gradients in stochastic optimization often fails to hold in practice. In the biased case, the bias term in the convergence bound of accelerated methods accumulates with the iterations, which makes the accelerated methods inapplicable. This paper surveys the state of the art and the open issues in first-order stochastic gradient methods, including the individual convergence rate, the biased-gradient setting, and nonconvex optimization problems, and on this basis points out several problems worth further study.
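
To make the averaged-versus-individual distinction concrete, the following is a minimal sketch, not taken from the paper: stochastic proximal subgradient descent on an l1-regularized least-squares problem, comparing the individual iterate with the averaged output. All problem sizes, the step-size schedule, and the names (prox_l1, w_avg) are illustrative assumptions. Each individual iterate comes out of a soft-thresholding step and can be exactly sparse, while the running average picks up every coordinate any iterate ever touched.

import numpy as np

# Sketch only: stochastic proximal subgradient descent on
# min_w (1/2n) * ||Xw - y||^2 + lam * ||w||_1, with illustrative sizes.
rng = np.random.default_rng(0)
n, d, lam = 200, 50, 0.1
w_true = np.zeros(d)
w_true[:5] = 1.0                      # sparse ground truth
X = rng.normal(size=(n, d))
y = X @ w_true + 0.01 * rng.normal(size=n)

def prox_l1(v, thresh):
    # Soft-thresholding: proximal operator of thresh * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - thresh, 0.0)

w = np.zeros(d)
w_sum = np.zeros(d)
T = 5000
for t in range(1, T + 1):
    i = rng.integers(n)               # sample one training example
    grad = (X[i] @ w - y[i]) * X[i]   # stochastic gradient of the loss term
    eta = 1.0 / np.sqrt(t)            # O(1/sqrt(t)) step size
    w = prox_l1(w - eta * grad, eta * lam)
    w_sum += w

w_avg = w_sum / T                     # averaged output over all iterates
print("nonzeros, individual iterate:", np.count_nonzero(w))
print("nonzeros, averaged output:  ", np.count_nonzero(w_avg))
# Typically the individual iterate is exactly sparse, while the averaged
# output carries many small nonzero entries, i.e. averaging destroys the
# sparse structure that the survey argues individual convergence preserves.
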
Keywords: machine learning  stochastic optimization  individual convergence  biased gradient estimation  nonconvex problems