Title
On Gradient Descent Ascent for Nonconvex-Concave Minimax Problems.
Abstract
We consider nonconvex-concave minimax problems, $\min_{x} \max_{y\in\mathcal{Y}} f(x, y)$, where $f$ is nonconvex in $x$ but concave in $y$. The standard algorithm for solving this problem is the celebrated gradient descent ascent (GDA) algorithm, which has been widely used in machine learning, control theory, and economics. However, despite the solid theory for the convex-concave setting, GDA can converge to limit cycles or even diverge in the general setting. In this paper, we present a nonasymptotic analysis of GDA for solving nonconvex-concave minimax problems, showing that GDA can efficiently find a stationary point of the function $\Phi(\cdot) := \max_{y\in\mathcal{Y}} f(\cdot, y)$. To the best of our knowledge, this is the first theoretical guarantee for GDA in this setting, shedding light on its practical performance in many real applications.
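For concreteness, the GDA iteration the abstract refers to is a simultaneous descent step on $x$ and projected ascent step on $y$. The sketch below is a minimal NumPy illustration on a toy nonconvex-concave objective; the objective, stepsizes, and ball constraint set $\mathcal{Y}$ are illustrative assumptions, not the paper's setup (though the smaller stepsize on $x$ mirrors the two-timescale regime analyzed for nonconvex-concave problems).

```python
import numpy as np

def project_ball(y, radius=1.0):
    """Euclidean projection onto Y = {y : ||y||_2 <= radius} (an assumed choice of Y)."""
    norm = np.linalg.norm(y)
    return y if norm <= radius else y * (radius / norm)

def gda(grad_x, grad_y, x0, y0, eta_x=0.01, eta_y=0.1, steps=5000):
    """Projected gradient descent ascent: descend on the min player x,
    ascend on the max player y, projecting y back onto Y each step.
    The stepsizes here are illustrative, not the paper's schedule."""
    x, y = np.asarray(x0, dtype=float), np.asarray(y0, dtype=float)
    for _ in range(steps):
        gx, gy = grad_x(x, y), grad_y(x, y)
        x = x - eta_x * gx                 # descent step for min over x
        y = project_ball(y + eta_y * gy)   # projected ascent step for max over y
    return x, y

# Toy objective f(x, y) = x^4/4 - x^2/2 + x*y - y^2/2:
# nonconvex in x, strongly concave in y.
grad_x = lambda x, y: x**3 - x + y
grad_y = lambda x, y: x - y

x_hat, y_hat = gda(grad_x, grad_y, x0=np.array([0.5]), y0=np.array([0.0]))
print(x_hat, y_hat)  # x_hat tends toward 0, a stationary point of Phi(x) = max_y f(x, y)
```

On this toy problem the inner maximizer is $y^*(x) = x$ (for $|x| \le 1$), so $\Phi(x) = x^4/4$ and the iterates drift toward its stationary point at the origin, consistent with the kind of guarantee the abstract describes.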
Year
2019
Venue
ICML 2020
DocType
Journal
Volume
abs/1906.00331
Citations
3
PageRank
0.37
References
0
Authors
3
Name               Order  Citations  PageRank
Tianyi Lin         1      147        11.79
Chi Jin            2      388        23.70
Michael I. Jordan  3      312203     640.80