Abstract
---
We study stochastic convex optimization subject to linear equality constraints. The traditional stochastic Alternating Direction Method of Multipliers (ADMM) [1] and its Nesterov acceleration scheme [2] can only achieve ergodic O(1/√K) convergence rates, where K is the number of iterations. By introducing Variance Reduction (VR) techniques, the convergence rates improve to ergodic O(1/K) [3, 4]. In this paper, we propose a new stochastic ADMM which elaborately integrates Nesterov's extrapolation and VR techniques. With Nesterov's extrapolation, our algorithm achieves a non-ergodic O(1/K) convergence rate, which is optimal for separable linearly constrained non-smooth convex problems, whereas the convergence rates of VR-based ADMM methods are actually tight at O(1/√K) in the non-ergodic sense. To the best of our knowledge, this is the first work that achieves a truly accelerated, stochastic convergence rate for constrained convex problems. The experimental results demonstrate that our algorithm is faster than existing state-of-the-art stochastic ADMM methods.
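The abstract's recipe combines an SVRG-type variance-reduced stochastic gradient with Nesterov-style extrapolation inside the ADMM updates. Below is a minimal illustrative sketch on a hypothetical lasso-type split, f(x) = (1/2n)‖Dx − t‖², g(y) = λ‖y‖₁, subject to x − y = 0; the problem instance, variable names, step sizes, and update order are assumptions for illustration and not the authors' exact method.

```python
import numpy as np

# Hypothetical toy instance: min_x (1/(2n))||Dx - t||^2 + lam*||y||_1  s.t. x - y = 0.
rng = np.random.default_rng(0)
n, d = 200, 20
D = rng.standard_normal((n, d))
t = D @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)
lam, beta, eta, theta = 0.1, 1.0, 0.01, 0.5  # l1 weight, penalty, step size, extrapolation

def grad_i(x, i):
    # Stochastic gradient of the i-th component loss (1/2)(D[i]@x - t[i])^2.
    return D[i] * (D[i] @ x - t[i])

x = np.zeros(d)          # primal variable updated stochastically
x_prev = x.copy()        # previous iterate, used for extrapolation
x_snap = x.copy()        # SVRG snapshot point
y = np.zeros(d)          # second primal block (l1 part)
dual = np.zeros(d)       # Lagrange multiplier for x - y = 0
for epoch in range(30):
    full_grad = D.T @ (D @ x_snap - t) / n          # full gradient at the snapshot (SVRG)
    for _ in range(n):
        x_ext = x + theta * (x - x_prev)            # Nesterov-style extrapolation
        i = rng.integers(n)
        g = grad_i(x_ext, i) - grad_i(x_snap, i) + full_grad  # variance-reduced gradient
        x_prev = x.copy()
        # Linearized x-update of the augmented Lagrangian
        # f(x) + <dual, x - y> + (beta/2)||x - y||^2.
        x = x_ext - eta * (g + dual + beta * (x_ext - y))
        # Exact y-update: soft-thresholding, i.e. prox of (lam/beta)*||.||_1.
        z = x + dual / beta
        y = np.sign(z) * np.maximum(np.abs(z) - lam / beta, 0.0)
        dual = dual + beta * (x - y)                # dual ascent step
    x_snap = x.copy()                               # refresh the SVRG snapshot
print("constraint violation ||x - y||:", np.linalg.norm(x - y))
```

The point of the combination is that the variance-reduced gradient keeps the stochastic noise decaying across epochs, which is what lets extrapolation on x pay off with the faster non-ergodic rate claimed in the abstract, rather than being washed out by gradient noise.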
Year | Venue | Field
---|---|---
2017 | ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 30 (NIPS 2017) | Convergence (routing), Discrete mathematics, Mathematical optimization, Ergodic theory, Separable space, Regular polygon, Extrapolation, Rate of convergence, Variance reduction, Convex optimization, Mathematics
DocType | Volume | ISSN
---|---|---
Conference | 30 | 1049-5258
Citations | PageRank | References
---|---|---
0 | 0.34 | 14
Authors (3)
---
Name | Order | Citations | PageRank
---|---|---|---
Cong Fang | 1 | 17 | 7.14
Feng Cheng | 2 | 0 | 0.34
Zhouchen Lin | 3 | 4805 | 203.69