Abstract |
---|
In this work, we present theoretical results on the convergence of non-convex accelerated gradient descent for matrix factorization models. The technique is applied to matrix sensing problems with squared loss, for the estimation of a rank-$r$ optimal solution $X^\star \in \mathbb{R}^{n \times n}$. We show that acceleration leads to a linear convergence rate, even in non-convex settings where the variable $X$ is represented as $U U^\top$ for $U \in \mathbb{R}^{n \times r}$. Our result has the same dependence on the condition number of the objective --and of the optimal solution-- as that of the recent results on non-accelerated algorithms. However, acceleration is observed in practice, both on synthetic examples and in two real applications: recovery of neuronal multi-unit activity from single-electrode recordings, and quantum state tomography on quantum computing simulators. |
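
The abstract describes running accelerated (momentum) gradient descent directly on the factor $U$ of $X = U U^\top$ for matrix sensing with squared loss. Below is a minimal sketch of that setup, assuming a Gaussian sensing operator, spectral initialization, and illustrative step-size and momentum constants; it is not the paper's exact algorithm or tuned parameters.

```python
# A minimal sketch (illustrative constants, not the paper's exact algorithm):
# Nesterov-style accelerated gradient descent on the factorized matrix sensing
# objective f(U) = (1/2m) * || A(U U^T) - y ||_2^2, where X = U U^T, U in R^{n x r}.
import numpy as np

rng = np.random.default_rng(0)
n, r, m = 20, 2, 400                      # dimension, rank, number of measurements

# Ground-truth rank-r PSD matrix X* = U* U*^T and a Gaussian sensing operator A.
U_star = rng.standard_normal((n, r))
X_star = U_star @ U_star.T
A = rng.standard_normal((m, n, n))
A = (A + A.transpose(0, 2, 1)) / 2        # symmetrize each sensing matrix
y = np.einsum('kij,ij->k', A, X_star)     # y_k = <A_k, X*>

def grad(U):
    """Gradient of f(U) = (1/2m) * sum_k (<A_k, U U^T> - y_k)^2 with respect to U."""
    residual = np.einsum('kij,ij->k', A, U @ U.T) - y
    G = np.einsum('k,kij->ij', residual, A) / m
    return 2.0 * G @ U                    # G is symmetric, so df/dU = 2 G U

# Spectral initialization: top-r eigenpairs of (1/m) * sum_k y_k A_k, which concentrates around X*.
M = np.einsum('k,kij->ij', y, A) / m
vals, vecs = np.linalg.eigh(M)
U = vecs[:, -r:] * np.sqrt(np.maximum(vals[-r:], 0.0))

# Accelerated iteration: gradient step at the extrapolated point, then momentum update.
eta = 0.25 / np.linalg.norm(U @ U.T, 2)   # illustrative step size, scaled by the initial spectral norm
beta = 0.5                                # illustrative momentum weight
Z = U.copy()                              # extrapolated ("momentum") iterate
for _ in range(300):
    U_next = Z - eta * grad(Z)
    Z = U_next + beta * (U_next - U)
    U = U_next

print("relative error:", np.linalg.norm(U @ U.T - X_star) / np.linalg.norm(X_star))
```
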
Year | Venue | Field |
---|---|---|
2018 | arXiv: Learning | Convergence (routing), Mathematical optimization, Gradient descent, Condition number, Square (algebra), Matrix (mathematics), Mathematical analysis, Matrix decomposition, Quantum tomography, Rate of convergence, Mathematics
DocType | Volume | Citations
---|---|---|
Journal | abs/1806.00534 | 0
PageRank | References | Authors
---|---|---|
0.34 | 30 | 4
Name | Order | Citations | PageRank |
---|---|---|---|
Anastasios T. Kyrillidis | 1 | 185 | 17.11 |
Shashanka Ubaru | 2 | 58 | 8.97
Georgios Kollias | 3 | 4 | 2.12 |
Kristofer E Bouchard | 4 | 18 | 8.99 |