Title
Variance reduction on general adaptive stochastic mirror descent.
Abstract
In this work, we propose a simple and general algorithmic framework for applying variance reduction to adaptive mirror descent algorithms to accelerate their convergence. We introduce the SVRAMD algorithm and provide a general convergence analysis for both nonsmooth nonconvex optimization and nonconvex optimization under the generalized Polyak–Łojasiewicz (P–L) condition. We prove that variance reduction reduces the gradient complexity of all adaptive mirror descent algorithms that satisfy a mild assumption, and thus accelerates their convergence. In particular, our general theory implies that variance reduction can be applied to algorithms with distinct choices of the proximal function, such as gradient descent with time-varying step sizes, mirror descent with \(L_1\) mirror maps, and self-adaptive algorithms such as AdaGrad and RMSProp. Moreover, the proved convergence rates of SVRAMD recover the existing rates without requiring complicated algorithmic components, which indicates their optimality. Extensive experiments validate our theoretical findings.
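The abstract describes SVRAMD only at a high level. As a rough illustration of the general idea, the sketch below plugs a standard SVRG-style variance-reduced gradient estimate into a mirror-descent (Bregman proximal) step with the squared-L2 mirror map. All function names, hyperparameters, and the choice of mirror map are illustrative assumptions, not the authors' exact SVRAMD procedure.

```python
# Minimal sketch (assumed notation, not the authors' exact SVRAMD):
# an SVRG-style variance-reduced gradient estimate combined with a
# mirror-descent proximal step.
import numpy as np

def mirror_step_l2(x, g, lr):
    # Bregman proximal step with the squared-L2 mirror map reduces
    # to a plain gradient step; other mirror maps (e.g. negative
    # entropy) would change this update.
    return x - lr * g

def svr_mirror_descent(grad_i, n, x0, lr=0.1, epochs=10, inner_steps=100, seed=0):
    """grad_i(x, i): gradient of the i-th component function at x; n: number of components."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(epochs):
        snapshot = x.copy()
        # Full gradient at the snapshot point (the variance-reduction anchor).
        full_grad = np.mean([grad_i(snapshot, i) for i in range(n)], axis=0)
        for _ in range(inner_steps):
            i = rng.integers(n)
            # Variance-reduced estimate: unbiased, with variance shrinking
            # as x approaches the snapshot.
            g = grad_i(x, i) - grad_i(snapshot, i) + full_grad
            x = mirror_step_l2(x, g, lr)
    return x

# Toy usage: stochastic least squares, f_i(x) = 0.5 * (a_i @ x - b_i)**2.
A = np.random.default_rng(1).normal(size=(200, 5))
b = A @ np.ones(5)
x_hat = svr_mirror_descent(lambda x, i: (A[i] @ x - b[i]) * A[i], n=200, x0=np.zeros(5))
```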
Year
2022
DOI
10.1007/s10994-022-06227-3
Venue
Machine Learning
Keywords
Variance reduction, Adaptive mirror descent, Nonconvex nonsmooth optimization, General framework, Convergence analysis
DocType
Journal
Volume
111
Issue
12
ISSN
0885-6125
Citations
0
PageRank
0.34
References
0
Authors
4
Name, Order, Citations, PageRank
Wenjie Li, 1, 0, 0.34
Zhanyu Wang, 2, 20, 5.01
YiChen Zhang, 3, 26, 10.72
Guang Cheng, 4, 0, 0.34