Abstract |
---|
Nonnegative matrix factorization (NMF) has become a ubiquitous tool for data analysis. An important variant is the sparse NMF problem, which arises when we explicitly require the learnt features to be sparse. A natural measure of sparsity is the L$_0$ norm; however, its optimization is NP-hard. Mixed norms, such as the L$_1$/L$_2$ measure, have been shown to model sparsity robustly, based on intuitive attributes that such measures need to satisfy. This is in contrast to computationally cheaper alternatives such as the plain L$_1$ norm. However, present algorithms for optimizing the mixed L$_1$/L$_2$ norm are slow, and alternative formulations for sparse NMF, such as those based on the L$_1$ and L$_0$ norms, have been proposed instead. Our proposed algorithm solves the mixed-norm sparsity constraints without sacrificing computation time. We present experimental evidence on real-world datasets showing that our new algorithm runs an order of magnitude faster than the current state-of-the-art solvers optimizing the mixed norm and is suitable for large-scale datasets. |
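The mixed L$_1$/L$_2$ sparsity measure discussed in the abstract is commonly defined as Hoyer's sparseness, which interpolates between 0 (all entries equal) and 1 (a single nonzero entry). A minimal sketch, assuming NumPy; the function name is illustrative and not taken from the paper:

```python
import numpy as np

def hoyer_sparseness(x):
    """L1/L2 sparseness of a nonzero vector x:
    (sqrt(n) - ||x||_1 / ||x||_2) / (sqrt(n) - 1).
    Returns 1.0 for a 1-sparse vector, 0.0 when all entries are equal."""
    x = np.asarray(x, dtype=float)
    n = x.size
    l1 = np.abs(x).sum()
    l2 = np.sqrt((x ** 2).sum())
    return (np.sqrt(n) - l1 / l2) / (np.sqrt(n) - 1)

# e.g. hoyer_sparseness([1, 0, 0, 0]) -> 1.0
#      hoyer_sparseness([1, 1, 1, 1]) -> 0.0
```

In sparse NMF, a constraint of this form is imposed on each column of the learnt feature matrix, which is what makes the optimization harder than with a plain L$_1$ penalty.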
Year | Venue | Field |
---|---|---|
2013 | International Conference on Learning Representations | Mathematical optimization, Non-negative matrix factorization, Artificial intelligence, Coordinate descent, Machine learning, Mathematics, Computation
DocType | Volume | Citations
---|---|---|
Journal | abs/1301.3527 | 7

PageRank | References | Authors
---|---|---|
0.51 | 17 | 6
Name | Order | Citations | PageRank |
---|---|---|---|
Vamsi K. Potluru | 1 | 12 | 4.03 |
Sergey M. Plis | 2 | 189 | 25.08 |
Jonathan Le Roux | 3 | 839 | 68.14 |
Barak A. Pearlmutter | 4 | 1963 | 567.26 |
Vince D Calhoun | 5 | 2769 | 268.91 |
Thomas P. Hayes | 6 | 659 | 54.21 |