Title
On the Convergence of Adaptive Gradient Methods for Nonconvex Optimization.
Abstract
Adaptive gradient methods are workhorses in deep learning. However, the convergence guarantees of adaptive gradient methods for nonconvex optimization have not been sufficiently studied. In this paper, we provide a sharp analysis of a recently proposed adaptive gradient method, the partially adaptive momentum estimation (Padam) method (Chen and Gu, 2018), which admits many existing adaptive gradient methods, such as AdaGrad, RMSProp, and AMSGrad, as special cases. Our analysis shows that, for smooth nonconvex functions, Padam converges to a first-order stationary point at the rate of $O\big(\big(\sum_{i=1}^d \|\mathbf{g}_{1:T,i}\|_2\big)^{1/2}/T^{3/4} + d/T\big)$, where $T$ is the number of iterations, $d$ is the dimension, $\mathbf{g}_1,\ldots,\mathbf{g}_T$ are the stochastic gradients, and $\mathbf{g}_{1:T,i} = [g_{1,i},g_{2,i},\ldots,g_{T,i}]^\top$. Our theoretical result also suggests that, to achieve a faster convergence rate, it is necessary to use Padam instead of AMSGrad. This is well aligned with the empirical results on deep learning reported in Chen and Gu (2018).
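For context, Padam applies a partial power $p \in (0, 1/2]$ to the AMSGrad second-moment estimate, so $p = 1/2$ recovers AMSGrad and smaller $p$ interpolates toward SGD with momentum, consistent with the special cases listed in the abstract. The following NumPy sketch shows one update step under our reading of Chen and Gu (2018); the function name padam_step, the default hyperparameters, and the eps term are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def padam_step(x, g, m, v, v_hat, lr=0.1, beta1=0.9, beta2=0.999, p=0.125, eps=1e-8):
    """One Padam step (sketch): AMSGrad-style moment estimates with a partial power p.

    p = 1/2 recovers AMSGrad; p close to 0 approaches SGD with momentum.
    Defaults and the eps term are illustrative choices, not taken from the paper.
    """
    m = beta1 * m + (1 - beta1) * g        # exponential moving average of gradients
    v = beta2 * v + (1 - beta2) * g ** 2   # exponential moving average of squared gradients
    v_hat = np.maximum(v_hat, v)           # AMSGrad max step keeps the denominator nondecreasing
    x = x - lr * m / (v_hat ** p + eps)    # partially adaptive update: power p instead of 1/2
    return x, m, v, v_hat

# Minimal usage on a toy quadratic f(x) = 0.5 * ||x||^2, whose gradient is x itself.
x = np.ones(3)
m = np.zeros(3); v = np.zeros(3); v_hat = np.zeros(3)
for _ in range(100):
    g = x                                  # stand-in for a stochastic gradient
    x, m, v, v_hat = padam_step(x, g, m, v, v_hat)
```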
Year
2018
Venue
arXiv: Learning
Field
Convergence (routing), Gradient method, Discrete mathematics, Mathematical optimization, Stationary point, Momentum, Rate of convergence, Mathematics
DocType
Journal
Volume
abs/1808.05671
Citations
10
PageRank
0.53
References
15
Authors
5
Name            Order  Citations  PageRank
Dongruo Zhou    1      50         9.91
Yiqi Tang       2      10         1.20
Ziyan Yang      3      11         1.22
Yuan Cao        4      20         6.45
Quanquan Gu     5      1116       78.25