Title
Training Compressed Fully-Connected Networks with a Density-Diversity Penalty
Abstract
Deep models have achieved great success on a variety of challenging tasks. However, the models that achieve great performance often have an enormous number of parameters, leading to correspondingly great demands on both computational and memory resources, especially for fully-connected layers. In this work, we propose a new "density-diversity penalty" regularizer that can be applied to fully-connected layers of neural networks during training. We show that using this regularizer results in significantly fewer nonzero parameters (i.e., high sparsity) and also significantly fewer distinct values (i.e., low diversity), so that the trained weight matrices can be highly compressed without any appreciable loss in performance. The resulting trained models can hence reside on computational platforms (e.g., portables, Internet-of-Things devices) where they would otherwise be prohibitively expensive to deploy.
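The following is a minimal, illustrative sketch (not the authors' reference implementation) of how a density-diversity-style penalty on a fully-connected weight matrix could be computed in NumPy. It assumes the penalty combines an L1-style "density" term with a "diversity" term given by the sum of pairwise absolute differences between weight entries; the function name and the coefficients lam_density and lam_diversity are hypothetical, and the exact formulation and weighting in the paper may differ.

import numpy as np

def density_diversity_penalty(W, lam_density=1e-4, lam_diversity=1e-6):
    # Illustrative penalty for a fully-connected weight matrix W.
    # "Density" is approximated by the L1 norm (pushes weights toward zero);
    # "diversity" by the sum of pairwise absolute differences between all
    # weight entries (pushes weights toward a small set of shared values).
    w = W.ravel()
    density = np.abs(w).sum()

    # Sum of |w_i - w_j| over all pairs i < j, computed in O(n log n)
    # by sorting: the k-th smallest entry (0-indexed) contributes with
    # coefficient (2k - n + 1) to the pairwise-difference sum.
    w_sorted = np.sort(w)
    n = w_sorted.size
    coeffs = 2 * np.arange(n) - (n - 1)
    diversity = float(coeffs @ w_sorted)

    return lam_density * density + lam_diversity * diversity

# Example usage on a random weight matrix:
# rng = np.random.default_rng(0)
# W = rng.normal(size=(256, 128))
# print(density_diversity_penalty(W))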
Year
2017
Venue
International Conference on Learning Representations
Field
Matrix (mathematics), Computer science, Artificial intelligence, Deep learning, Artificial neural network, Machine learning
DocType
Conference
Citations
1
PageRank
0.37
References
0
Authors
4
Name                      Order  Citations  PageRank
Shengjie Wang             1      1          0.37
Haoran Cai                2      8          3.51
Jeff A. Bilmes            3      278        16.88
William Stafford Noble    4      2907       203.56