Title
On Lower and Upper Bounds in Smooth Strongly Convex Optimization - A Unified Approach via Linear Iterative Methods.
Abstract
In this thesis, we develop a novel framework for studying smooth and strongly convex optimization algorithms, both deterministic and stochastic. Focusing on quadratic functions, we are able to examine optimization algorithms as the recursive application of linear operators. This, in turn, reveals a powerful connection between a class of optimization algorithms and the analytic theory of polynomials, from which new lower and upper bounds are derived. In particular, we present a new and natural derivation of Nesterov's well-known Accelerated Gradient Descent (AGD) method by employing simple 'economic' polynomials. This rather natural interpretation of AGD contrasts with earlier ones, which lacked a simple yet solid motivation. Lastly, whereas existing lower bounds are only valid when the dimensionality scales with the number of iterations, our lower bound holds in the natural regime where the dimensionality is fixed.
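The abstract's central device, viewing a first-order method on a quadratic as the repeated application of a fixed linear operator, so that the error after k steps is a polynomial in the Hessian applied to the initial error, can be checked numerically. The sketch below is not taken from the paper; it is a minimal, assumed illustration for plain gradient descent with a constant step size (the problem instance and the step-size choice 1/L are illustrative).

```python
import numpy as np

# Minimal sketch (assumed, not from the paper): for the quadratic
# f(x) = 0.5 * x^T A x - b^T x, gradient descent with step size eta satisfies
#     x_{k+1} - x* = (I - eta * A)(x_k - x*),
# so the error after k steps is p_k(A)(x_0 - x*) with p_k(t) = (1 - eta * t)^k.
rng = np.random.default_rng(0)
n, k = 5, 20

# Random symmetric positive definite A: a smooth, strongly convex quadratic.
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)
b = rng.standard_normal(n)
x_star = np.linalg.solve(A, b)           # unique minimizer
eta = 1.0 / np.linalg.eigvalsh(A).max()  # illustrative step size 1/L

# Run plain gradient descent from x_0 = 0.
x = np.zeros(n)
for _ in range(k):
    x = x - eta * (A @ x - b)

# The same error, predicted by the polynomial p_k applied to the Hessian.
predicted = np.linalg.matrix_power(np.eye(n) - eta * A, k) @ (np.zeros(n) - x_star)
print(np.allclose(x - x_star, predicted))  # True
```

With step size 1/L the naive polynomial (1 - t/L)^k decays slowly on small eigenvalues; the abstract indicates that the paper's derivation of AGD instead rests on better, 'economic' polynomial choices over the Hessian's spectrum.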
Year
2014
Venue
CoRR
Field
Discrete mathematics, Mathematical optimization, Gradient descent, Polynomial, Iterative method, Upper and lower bounds, Curse of dimensionality, Convex function, Quadratic function, Random coordinate descent, Mathematics
DocType
Journal
Volume
abs/1410.6387
Citations
1
PageRank
0.45
References
6
Authors
1
Name
Yossi Arjevani
Order
1
Citations
3
PageRank
45.55