Title
Polynomial Escape-Time from Saddle Points in Distributed Non-Convex Optimization
Abstract
The diffusion strategy for distributed learning from streaming data employs local stochastic gradient updates along with the exchange of iterates over neighborhoods. In this work we establish that agents cluster around a network centroid in the mean-fourth sense and proceed to study the dynamics of this point. We establish expected descent in non-convex environments in the large-gradient regime and introduce a short-term model to examine the dynamics over finite time horizons. Using this model, we show that the diffusion strategy is able to escape from strict saddle points in O(1/μ) iterations, where μ denotes the step-size, and that it returns approximately second-order stationary points in a polynomial number of iterations. Relative to prior works on the polynomial escape from saddle points, most of which focus on centralized perturbed or stochastic gradient descent, our approach requires less restrictive conditions on the gradient noise process.
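The abstract describes the diffusion (adapt-then-combine) strategy: each agent takes a local stochastic gradient step and then averages the intermediate iterates over its neighborhood. The sketch below is a minimal illustration of that recursion, not the paper's implementation: the toy cost with a strict saddle at the origin, the ring network, the combination matrix A, the step-size mu, and the Gaussian gradient-noise model are all assumptions introduced here for the example.

```python
import numpy as np

K, d, mu, T = 10, 2, 0.01, 5000  # agents, dimension, step-size, iterations
rng = np.random.default_rng(0)

# Hypothetical toy cost J(w) = (w0^2 - w1^2)/2 + w1^4/4: strict saddle at
# w = 0 (Hessian diag(1, -1)), minimizers at (0, +1) and (0, -1).
def grad(w):
    return np.array([w[0], -w[1] + w[1] ** 3])

# Doubly-stochastic combination weights for a ring: each agent averages
# itself with its two neighbors.
A = np.zeros((K, K))
for k in range(K):
    A[k, k] = A[k, (k - 1) % K] = A[k, (k + 1) % K] = 1.0 / 3.0

W = np.zeros((K, d))  # all agents start exactly at the strict saddle
for i in range(T):
    # Adapt: local stochastic gradient step; the additive Gaussian term
    # stands in for the gradient noise process.
    psi = W - mu * (np.apply_along_axis(grad, 1, W)
                    + 0.1 * rng.standard_normal((K, d)))
    # Combine: average the intermediate iterates over each neighborhood.
    W = A @ psi

print("network centroid after T steps:", W.mean(axis=0))
```

Starting all agents at the saddle, the gradient noise perturbs the iterates and the network centroid drifts toward the minimizers at w1 = ±1, illustrating on a toy cost the escape behavior the abstract quantifies as O(1/μ) iterations.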
Year
2019
DOI
10.1109/CAMSAP45676.2019.9022458
Venue
2019 IEEE 8th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)
Keywords
Stochastic optimization, adaptation, nonconvex costs, saddle point, escape time, gradient noise, stationary points, distributed optimization, diffusion learning
DocType
Conference
ISBN
978-1-7281-5550-0
Citations
0
PageRank
0.34
References
15
Authors
2
Name | Order | Citations | PageRank
Stefan Vlaski | 1 | 231 | 1.39
Ali H. Sayed | 2 | 9134 | 667.71