Title
Online PCA with Optimal Regrets
Abstract
We investigate the online version of PCA, where in each trial a learning algorithm plays a k-dimensional subspace and suffers the compression loss incurred when the next instance is projected onto the chosen subspace. In this setting, we give regret bounds for two popular online algorithms, Gradient Descent (GD) and Matrix Exponentiated Gradient (MEG). We show that both algorithms are essentially optimal in the worst case when the regret is expressed as a function of the number of trials. This comes as a surprise, since MEG is commonly believed to perform sub-optimally when the instances are sparse. This different behavior of MEG for PCA stems mainly from the non-negativity of the loss, which makes the PCA setting qualitatively different from other settings studied in the literature. Furthermore, we show that when regret bounds are expressed as a function of a loss budget, MEG remains optimal and strictly outperforms GD. Next, we study a generalization of the online PCA problem, in which Nature is allowed to play dense instances, i.e., positive matrices with bounded largest eigenvalue. Again, we show that MEG is optimal and strictly better than GD in this setting.
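To make the setting concrete, below is a minimal NumPy sketch of one MEG step for online PCA, in the spirit of the capped density-matrix formulation of Warmuth and Kuzmin that this line of work builds on; it is not the paper's own code. The parameter W is a symmetric matrix with eigenvalues in [0, 1] and trace n - k, representing the expected projection onto the discarded (n - k)-dimensional complement, so x^T W x is the expected compression loss. The helper names (`cap_and_normalize`, `meg_pca_step`), the step size, and the toy data loop are illustrative assumptions.

```python
import numpy as np


def cap_and_normalize(w, total):
    """Rescale nonnegative weights so they sum to `total`, with each
    entry capped at 1 (capping scheme in the spirit of capped MEG).
    Repeatedly: scale the uncapped entries to absorb the remaining
    mass, then cap any entry that would exceed 1."""
    w = np.asarray(w, dtype=float)
    capped = np.zeros(len(w), dtype=bool)
    for _ in range(len(w)):
        free = ~capped
        scale = (total - capped.sum()) / w[free].sum()
        v = np.where(capped, 1.0, w * scale)
        if v[free].max() <= 1.0 + 1e-12:
            return np.clip(v, 0.0, 1.0)
        capped |= v >= 1.0
    return np.clip(v, 0.0, 1.0)


def meg_pca_step(W, x, eta, k):
    """One MEG update: gradient step in matrix-log space, then
    project the spectrum back onto {0 <= eigenvalues <= 1, trace = n - k}."""
    vals, vecs = np.linalg.eigh(W)
    log_W = vecs @ np.diag(np.log(np.maximum(vals, 1e-12))) @ vecs.T
    S = log_W - eta * np.outer(x, x)  # multiplicative update in log space
    s_vals, s_vecs = np.linalg.eigh(S)
    new_vals = cap_and_normalize(np.exp(s_vals), total=W.shape[0] - k)
    return s_vecs @ np.diag(new_vals) @ s_vecs.T


# Toy run: n = 5 dimensions, keep k = 2, unit-norm instances.
rng = np.random.default_rng(0)
n, k, eta = 5, 2, 0.5
W = np.eye(n) * (n - k) / n  # uniform start, trace n - k
total_loss = 0.0
for t in range(100):
    x = rng.normal(size=n)
    x /= np.linalg.norm(x)
    total_loss += x @ W @ x  # expected compression loss this trial
    W = meg_pca_step(W, x, eta, k)
print(f"cumulative expected compression loss: {total_loss:.3f}")
```

Note that playing an actual k-dimensional subspace in each trial would require decomposing I - W into a mixture of rank-k projections and sampling from it; for brevity the sketch reports the expected compression loss x^T W x instead.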
Year
2013
DOI
10.1007/978-3-642-40935-6_8
Venue
ALGORITHMIC LEARNING THEORY (ALT 2013)
Keywords
Online learning, regret bounds, expert setting, k-sets, PCA, Gradient Descent and Matrix Exponentiated Gradient algorithms
DocType
Journal
Volume
8139
Issue
1
ISSN
0302-9743
Citations
4
PageRank
0.50
References
8
Authors
3
Name                 Order  Citations  PageRank
Jiazhong Nie         1      45         4.72
Wojciech Kotlowski   2      158        16.32
Manfred K. Warmuth   3      6105       1975.48