Title
A Linearly Convergent Method for Non-Smooth Non-Convex Optimization on the Grassmannian with Applications to Robust Subspace and Dictionary Learning
Abstract
Minimizing a non-smooth function over the Grassmannian appears in many applications in machine learning. In this paper we show that if the objective satisfies a certain Riemannian regularity condition (RRC) with respect to some point in the Grassmannian, then a projected Riemannian subgradient method with appropriate initialization and geometrically diminishing step size converges at a linear rate to that point. We show that for both the robust subspace learning method Dual Principal Component Pursuit (DPCP) and the Orthogonal Dictionary Learning (ODL) problem, the RRC is satisfied with respect to the appropriate points of interest, namely the subspace orthogonal to the sought subspace for DPCP and the orthonormal dictionary atoms for ODL. Consequently, within a unified framework, we obtain significant improvements in the convergence theory of both methods.
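For concreteness, the following is a minimal illustrative sketch (not the authors' code) of a projected Riemannian subgradient method with geometrically diminishing step size, applied here to the DPCP-style objective of minimizing ||Y^T b||_1 over unit vectors b; the function name dpcp_subgradient and the parameters mu0, beta, and num_iters are hypothetical choices made only for this example.

import numpy as np

def dpcp_subgradient(Y, b0, mu0=1e-2, beta=0.9, num_iters=200):
    """Projected subgradient sketch: b <- Proj_sphere(b - mu_k * g_k),
    with mu_k = mu0 * beta**k (geometrically diminishing step size).
    Y  : (D, N) data matrix whose columns are (mostly) inlier points.
    b0 : (D,) initial vector (e.g., from a spectral initialization)."""
    b = b0 / np.linalg.norm(b0)
    mu = mu0
    for _ in range(num_iters):
        g = Y @ np.sign(Y.T @ b)      # subgradient of ||Y^T b||_1
        g = g - (g @ b) * b           # project onto the tangent space at b
        b = b - mu * g                # subgradient step
        b = b / np.linalg.norm(b)     # project back onto the unit sphere
        mu *= beta                    # geometric step-size decay
    return b

# Toy usage: inliers lying in the hyperplane x_D = 0, plus a few outliers.
rng = np.random.default_rng(0)
D = 5
inliers = rng.standard_normal((D, 200)); inliers[-1, :] = 0.0
outliers = rng.standard_normal((D, 20))
Y = np.hstack([inliers, outliers])
b_hat = dpcp_subgradient(Y, rng.standard_normal(D))
print("|<b_hat, e_D>| =", abs(b_hat[-1]))  # near 1 when the normal vector is recovered

With a suitable initialization and decay rate beta, the iterates contract toward a normal vector of the inlier subspace, which is the kind of linear-rate behavior the RRC analysis in the paper formalizes.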
Year: 2019
Venue: NeurIPS
Keywords: dictionary learning
Field: Mathematical optimization, Non convex optimization, Dictionary learning, Algebra, Subspace topology, Computer science, Grassmannian
DocType: Conference
Citations: 1
PageRank: 0.34
References: 0
Authors: 5
Name                        Order  Citations  PageRank
Zhihui Zhu                  1      1          0.34
Tianyu Ding                 2      2          2.04
Daniel P. Robinson          3      261        21.51
Manolis C. Tsakiris         4      50         9.79
René Victor Valqui Vidal    5      5331       260.14