Title
Learning Less-Overlapping Representations
Abstract
In representation learning (RL), making the learned representations easy to interpret and preventing them from overfitting the training data are two important but challenging issues. To address these problems, we study a new type of regularization approach that encourages the supports of weight vectors in RL models to have small overlap, by simultaneously promoting near-orthogonality among vectors and sparsity of each vector. We apply the proposed regularizer to two models: neural networks (NNs) and sparse coding (SC), and develop an efficient ADMM-based algorithm for regularized SC. Experiments on various datasets demonstrate that weight vectors learned under our regularizer are more interpretable and have better generalization performance.
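The abstract names two ingredients (near-orthogonality among weight vectors and per-vector sparsity) but does not give the regularizer's exact form. Below is a minimal NumPy sketch of one common way to combine them, assuming a squared-Frobenius orthogonality proxy ||W W^T - I||_F^2 plus an elementwise L1 penalty; the function name nonoverlap_penalty and the weights lam_ortho, lam_sparse are hypothetical illustrations, not the paper's definitions.

import numpy as np

def nonoverlap_penalty(W, lam_ortho=1.0, lam_sparse=0.1):
    """Sketch of a less-overlap regularizer on a weight matrix W (k x d),
    where each row is one weight vector (e.g., a basis vector in SC or
    the incoming weights of one hidden unit in an NN).

    - Near-orthogonality: penalize the squared Frobenius distance between
      the Gram matrix W W^T and the identity (a common orthogonality proxy;
      assumed here, not necessarily the paper's exact term).
    - Sparsity: an elementwise L1 penalty shrinks entries toward zero.
    Jointly, these push the supports of different rows toward small overlap.
    """
    k = W.shape[0]
    gram = W @ W.T                                   # pairwise inner products of weight vectors
    ortho = np.linalg.norm(gram - np.eye(k), "fro") ** 2  # deviation from orthonormality
    sparse = np.abs(W).sum()                         # L1 sparsity over all entries
    return lam_ortho * ortho + lam_sparse * sparse

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.normal(size=(5, 20))
    print(nonoverlap_penalty(W))

In practice such a penalty would simply be added to the model's training loss; the paper's ADMM-based algorithm for the regularized SC problem is not reproduced in this sketch.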
Year
2017
Venue
CoRR
DocType
Journal
Volume
abs/1711.09300
Citations
0
PageRank
0.34
References
0
Authors
3
Name           Order  Citations  PageRank
Pengtao Xie    1      21         3.62
Hongbao Zhang  2      3          1.40
Bo Xing        3      7332       471.43