Title
Surrogate Losses for Online Learning of Stepsizes in Stochastic Non-Convex Optimization
Abstract
Stochastic Gradient Descent (SGD) has played a central role in machine learning. However, it requires a carefully hand-picked stepsize for fast convergence, which is notoriously tedious and time-consuming to tune. Over the last several years, a plethora of adaptive gradient-based algorithms have emerged to ameliorate this problem. They have proved effective in reducing the labor of tuning in practice, but many of them lack theoretical guarantees even in the convex setting. In this paper, we propose new surrogate losses that cast the problem of learning the optimal stepsizes for the stochastic optimization of a non-convex smooth objective function as an online convex optimization problem. This allows the use of no-regret online algorithms to compute optimal stepsizes on the fly. In turn, this yields an SGD algorithm with self-tuned stepsizes whose convergence rates automatically adapt to the level of noise.
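To make the abstract's idea concrete, below is a minimal sketch (not the paper's exact algorithm) of SGD whose stepsize is tuned on the fly by a no-regret online learner. It assumes, for illustration, a surrogate loss that is a convex quadratic in the stepsize eta, l_t(eta) = -eta*<g_t, g'_t> + (M/2)*eta^2*||g_t||^2, motivated by M-smoothness and using two independent stochastic gradients, and it updates eta by projected online gradient descent. The names grad_oracle, eta_max, and beta are hypothetical choices, and the exact surrogate and online algorithm in the paper may differ.

```python
# Sketch: SGD with a stepsize learned online via a convex surrogate loss.
# Assumptions (not taken from the paper's text): the surrogate
#   l_t(eta) = -eta * <g_t, g'_t> + (M / 2) * eta**2 * ||g_t||**2
# and a projected online-gradient-descent update for eta over [0, eta_max].
import numpy as np

def sgd_with_online_stepsizes(grad_oracle, x0, M, T,
                              eta0=0.01, eta_max=1.0, beta=0.1):
    """grad_oracle(x) returns one independent stochastic gradient at x."""
    x, eta = np.asarray(x0, dtype=float), eta0
    for t in range(T):
        g = grad_oracle(x)        # gradient used for the SGD step
        g_prime = grad_oracle(x)  # independent copy, used only in the surrogate
        # Derivative of the quadratic surrogate l_t at the current stepsize.
        surrogate_grad = -np.dot(g, g_prime) + M * eta * np.dot(g, g)
        # Online gradient descent on eta, projected back to [0, eta_max].
        eta = float(np.clip(eta - beta * surrogate_grad, 0.0, eta_max))
        # SGD step with the self-tuned stepsize.
        x = x - eta * g
    return x
```

Using a second, independent stochastic gradient g'_t in the surrogate (rather than reusing g_t) is a natural way to keep the surrogate's gradient in eta an unbiased estimate; this is a design choice of the sketch, stated here as an assumption.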
Year
2019
Venue
arXiv: Learning
Field
Online learning, Mathematical optimization, Non-convex optimization, Computer science, Artificial intelligence, Machine learning
DocType
Journal
Volume
abs/1901.09068
Citations
0
PageRank
0.34
References
9
Authors
3
Name                Order  Citations  PageRank
Zhenxun Zhuang      1      0          0.34
Ashok Cutkosky      2      14         10.02
Francesco Orabona   3      881        51.44