Title: Analytical convergence regions of accelerated gradient descent in nonconvex optimization under Regularity Condition
Abstract: There is growing interest in using robust control theory to analyze and design optimization and machine learning algorithms. This paper studies a class of nonconvex optimization problems whose cost functions satisfy the so-called Regularity Condition (RC). Empirical studies show that accelerated gradient descent (AGD) algorithms (e.g., Nesterov's acceleration and Heavy-ball) with proper initialization often work well in practice. However, convergence guarantees for such AGD algorithms under RC are largely absent from the literature. The main contribution of this paper is an analytical characterization of the convergence regions of AGD under RC via robust control tools. Since such optimization problems arise frequently in applications such as phase retrieval, neural network training, and matrix sensing, our result shows the promise of robust control theory in these areas.
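For context on the two methods the abstract names, the sketch below implements the standard Heavy-ball and Nesterov update rules. It is a minimal illustration under assumptions, not the paper's analysis: the least-squares toy objective, the step size alpha, and the momentum beta are all chosen here for demonstration only. For reference, one common form of the Regularity Condition (as in the Wirtinger flow literature) asks that <grad f(z), z - x*> >= (1/a)||z - x*||^2 + (1/b)||grad f(z)||^2 hold for all z near a minimizer x*, which yields gradient-descent-style progress without convexity.

```python
import numpy as np

# Toy smooth objective f(x) = 0.5 * ||A @ x - b||^2. This objective and the
# step/momentum values below are assumptions chosen for illustration; the
# paper treats general nonconvex costs satisfying the Regularity Condition
# (e.g. phase retrieval losses), not this convex toy problem.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)

def grad(x):
    """Gradient of the toy objective: A^T (A x - b)."""
    return A.T @ (A @ x - b)

def heavy_ball(x0, alpha, beta, iters=500):
    """Polyak Heavy-ball: x_{k+1} = x_k - alpha*grad(x_k) + beta*(x_k - x_{k-1})."""
    x_prev, x = x0, x0
    for _ in range(iters):
        x, x_prev = x - alpha * grad(x) + beta * (x - x_prev), x
    return x

def nesterov(x0, alpha, beta, iters=500):
    """Nesterov AGD: the gradient is evaluated at the extrapolated point y_k."""
    x_prev, x = x0, x0
    for _ in range(iters):
        y = x + beta * (x - x_prev)
        x, x_prev = y - alpha * grad(y), x
    return x

# Step size 1/L (L = largest eigenvalue of A^T A) and momentum 0.9 are one
# point inside a convergence region, not the paper's boundary values.
L = np.linalg.eigvalsh(A.T @ A).max()
x0 = np.zeros(5)
x_star = np.linalg.lstsq(A, b, rcond=None)[0]
for name, method in (("heavy-ball", heavy_ball), ("nesterov", nesterov)):
    err = np.linalg.norm(method(x0, alpha=1.0 / L, beta=0.9) - x_star)
    print(f"{name}: distance to minimizer = {err:.2e}")
```

Per the abstract, the paper's contribution is an analytical characterization of which step-size and momentum choices keep such iterations convergent when the cost only satisfies RC; the values used above are purely illustrative.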
Year: 2020
DOI: 10.1016/j.automatica.2019.108715
Venue: Automatica
Keywords: Nonconvex optimization, Regularity condition, Accelerated gradient descent, Robust control
Field: Convergence (routing), Mathematical optimization, Gradient descent, Phase retrieval, Matrix (mathematics), Acceleration, Artificial neural network, Robust control, Optimization problem, Mathematics
DocType: Journal
Volume: 113
Issue: 1
ISSN: 0005-1098
Citations: 1
PageRank: 0.35
References: 0
Authors: 4
Name            Order  Citations  PageRank
Huaqing Xiong   1      1          1.37
Yuejie Chi      2      720        56.67
Bin Hu          3      1          0.35
Wei Zhang       4      236        33.77