Title
Global Convergence Rate of Incremental Aggregated Gradient Methods for Nonsmooth Problems
Abstract
We analyze the proximal incremental aggregated gradient (PIAG) method for minimizing the sum of a large number of smooth component functions f(x) = sum_{i=1}^m f_i(x) and a convex function r(x). Such composite optimization problems arise in a number of machine learning applications, including regularized regression problems and constrained distributed optimization problems over sensor networks. Our method computes an approximate gradient for the function f(x) by aggregating the component gradients evaluated at outdated iterates over a finite window K and uses a proximal operator with respect to the regularization function r(x) at the intermediate iterate obtained by moving along the approximate gradient. Under the assumptions that f(x) is strongly convex and each f_i(x) is smooth with Lipschitz gradients, we show the first linear convergence rate result for the PIAG method and provide explicit convergence rate estimates that highlight the dependence on the condition number of the problem and the size of the window K over which outdated component gradients are evaluated.
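For concreteness, the update described in the abstract is x_{k+1} = prox_{γr}(x_k - γ sum_{i=1}^m ∇f_i(x_{k - τ_i^k})), where each delay τ_i^k is bounded by the window K. Below is a minimal Python sketch, not the paper's implementation: it assumes cyclic refreshing of component gradients (so K = m - 1), takes r(x) = reg * ||x||_1 so the proximal operator is soft-thresholding, and the names piag, soft_threshold, step, and reg are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def piag(grads, x0, step, n_iters, reg):
    # grads[i](x) returns the gradient of component f_i at x.
    # Stored gradients are refreshed cyclically, so every entry of the
    # table is at most K = len(grads) - 1 iterations out of date.
    m = len(grads)
    x = x0.copy()
    table = [g(x) for g in grads]    # outdated component gradients
    agg = np.sum(table, axis=0)      # running aggregate of the table
    for k in range(n_iters):
        i = k % m                    # cyclic component selection
        fresh = grads[i](x)          # re-evaluate one component gradient
        agg += fresh - table[i]      # keep the aggregate in sync
        table[i] = fresh
        # Proximal step w.r.t. r at the intermediate iterate.
        x = soft_threshold(x - step * agg, step * reg)
    return x

# Hypothetical instance: l1-regularized least squares with components
# f_i(x) = 0.5 * ||A_i x - b_i||^2; the stacked A has full column rank
# almost surely, so f is strongly convex as the analysis assumes.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((100, 10)), rng.standard_normal(100)
parts = np.array_split(np.arange(100), 5)
grads = [lambda x, Ai=A[p], bi=b[p]: Ai.T @ (Ai @ x - bi) for p in parts]
x_hat = piag(grads, np.zeros(10), step=1e-3, n_iters=3000, reg=0.1)
```

Consistent with the abstract's rate estimates, the admissible step size shrinks as the condition number and the delay window K grow; the small constant step above is a conservative illustrative choice, not a value prescribed by the paper.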
Year
2016
Venue
2016 IEEE 55th Conference on Decision and Control (CDC)
Field
Gradient method, Mathematical optimization, Condition number, Proximal gradient methods, Convex function, Lipschitz continuity, Proximal gradient methods for learning, Rate of convergence, Optimization problem, Mathematics
DocType
Conference
ISSN
0743-1546
Citations
0
PageRank
0.34
References
0
Authors
3
Name                  Order   Citations   PageRank
N. Denizcan Vanli     1       36          8.13
Mert Gürbüzbalaban    2       55          12.36
Asuman E. Ozdaglar    3       3228        200.27