Abstract |
---|
In representation learning (RL), making the learned representations easy to interpret and preventing them from overfitting the training data are two important but challenging issues. To address these problems, we study a new type of regularization approach that encourages the supports of weight vectors in RL models to have small overlap, by simultaneously promoting near-orthogonality among the vectors and sparsity of each vector. We apply the proposed regularizer to two models: neural networks (NNs) and sparse coding (SC), and develop an efficient ADMM-based algorithm for the regularized SC problem. Experiments on various datasets demonstrate that weight vectors learned under our regularizer are more interpretable and have better generalization performance. |
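The abstract's regularizer combines two ingredients: near-orthogonality among weight vectors and sparsity of each vector. As a rough illustration of how such terms can be combined, here is a minimal NumPy sketch using a Frobenius-norm Gram-matrix penalty plus an elementwise L1 term; the specific functional form, the coefficients `lam_ortho` and `lam_sparse`, and the rows-as-weight-vectors convention are assumptions for illustration, not the paper's exact penalty.

```python
import numpy as np

def less_overlap_penalty(W, lam_ortho=1.0, lam_sparse=0.1):
    """Illustrative regularizer in the spirit of the abstract (assumed form):
    push the rows of W toward near-orthogonality via ||W W^T - I||_F^2
    and toward sparsity via the elementwise L1 norm. Together these
    encourage weight vectors whose supports overlap little."""
    G = W @ W.T                                   # Gram matrix of the weight vectors
    ortho = np.sum((G - np.eye(W.shape[0]))**2)   # near-orthogonality term
    sparse = np.sum(np.abs(W))                    # sparsity term
    return lam_ortho * ortho + lam_sparse * sparse

# Example: 5 weight vectors in R^20
rng = np.random.default_rng(0)
W = rng.standard_normal((5, 20))
print(less_overlap_penalty(W))
```

The intuition matches the abstract: driving the Gram matrix toward the identity decorrelates the vectors, while the L1 term zeroes out individual entries, so the surviving supports tend to overlap less.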
Year | Venue | DocType
---|---|---
2017 | CoRR | Journal

Volume | Citations | PageRank
---|---|---
abs/1711.09300 | 0 | 0.34

References | Authors
---|---
0 | 3
Name | Order | Citations | PageRank |
---|---|---|---
Pengtao Xie | 1 | 21 | 3.62 |
Hongbao Zhang | 2 | 3 | 1.40 |
Bo Xing | 3 | 7332 | 471.43 |