On stochastic gradient and subgradient methods with adaptive steplength sequences
Authors: Farzad Yousefian, Angelia Nedić, Uday V. Shanbhag
Abstract: Traditionally, stochastic approximation (SA) schemes have been popular choices for solving stochastic optimization problems. However, the performance of standard SA implementations can vary significantly with the choice of the steplength sequence, and in general little guidance is available on good choices. Motivated by this gap, we present two adaptive steplength schemes for strongly convex differentiable stochastic optimization problems, equipped with convergence theory, that aim to reduce the reliance on user-specified parameters. The first scheme, referred to as a recursive steplength stochastic approximation (RSA) scheme, optimizes the error bounds to derive a rule that expresses the steplength at a given iteration as a simple function of the steplength at the previous iteration and certain problem parameters. The second scheme, termed a cascading steplength stochastic approximation (CSA) scheme, maintains the steplength sequence as a piecewise-constant decreasing function, with the reduction in the steplength occurring when a suitable error threshold is met. We then allow for nondifferentiable objectives, albeit with bounded subgradients over a certain domain. In this regime, we propose a local smoothing technique, based on random local perturbations of the objective function, that leads to a differentiable approximation of the function. Assuming a uniform distribution on the local randomness, we establish a Lipschitzian property for the gradient of the approximation and prove that the resulting Lipschitz bound grows at a modest rate with problem size. This facilitates the development of an adaptive steplength stochastic approximation framework, which now requires sampling in the product space of the original measure and the artificially introduced distribution.
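To make the steplength recursion concrete, here is a minimal sketch (an illustration, not the authors' code): stochastic gradient descent on a strongly convex quadratic with a recursive steplength of the form gamma_{k+1} = gamma_k (1 - c * gamma_k), the general shape of rule that optimizing an error-bound recursion yields. The quadratic test objective, the noise model, and all constants below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative strongly convex quadratic f(x) = 0.5 x^T A x - b^T x.
n = 10
A = np.diag(np.linspace(1.0, 5.0, n))  # eigenvalues in [1, 5]
b = rng.standard_normal(n)
x_star = np.linalg.solve(A, b)         # unique minimizer
eta = 1.0                              # strong convexity constant (min eigenvalue)
sigma = 0.1                            # std. dev. of additive gradient noise

def noisy_grad(x):
    """Stochastic first-order oracle: exact gradient plus zero-mean noise."""
    return A @ x - b + sigma * rng.standard_normal(n)

# Recursive steplength: gamma_{k+1} = gamma_k * (1 - c * gamma_k).
# With gamma_0 in (0, 1/c), the sequence stays positive, decreases to zero,
# and behaves like 1/(c*k) asymptotically, so no hand-tuned schedule is needed.
c = eta
gamma = 0.5 / c
x = np.zeros(n)
for k in range(20000):
    x = x - gamma * noisy_grad(x)
    gamma *= 1.0 - c * gamma

print("distance to minimizer:", np.linalg.norm(x - x_star))
```

The local smoothing idea in the second half of the abstract can be sketched in the same spirit: replace f by the smoothed function f_eps(x) = E[f(x + eps*u)], with u uniform on the unit ball, and feed the SA scheme subgradients evaluated at randomly perturbed points, which is the sampling in the product space mentioned above. The estimator below is a hypothetical one-sample version of that idea.

```python
def sample_unit_ball(n, rng):
    """Draw u uniformly from the n-dimensional unit ball."""
    u = rng.standard_normal(n)
    u /= np.linalg.norm(u)
    return (rng.random() ** (1.0 / n)) * u

def smoothed_grad_estimate(subgrad, x, eps, rng):
    """One-sample estimate of the gradient of the smoothed function
    f_eps(x) = E[f(x + eps*u)]: a subgradient of f at a uniformly
    perturbed point."""
    u = sample_unit_ball(len(x), rng)
    return subgrad(x + eps * u)
```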
This article is indexed in ScienceDirect and other databases.