Title
Predicting Parameters in Deep Learning
Abstract
We demonstrate that there is significant redundancy in the parameterization of several deep learning models. Given only a few weight values for each feature it is possible to accurately predict the remaining values. Moreover, we show that not only can the parameter values be predicted, but many of them need not be learned at all. We train several different architectures by learning only a small number of weights and predicting the rest. In the best case we are able to predict more than 95% of the weights of a network without any drop in accuracy.
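The prediction idea in the abstract can be illustrated with a toy experiment: when a network's filters are spatially smooth, observing only a small fraction of each filter's weights and interpolating the rest with kernel ridge regression over pixel coordinates recovers the full filters. The sketch below is a simplified illustration in that spirit; the grid size, synthetic filters, RBF length scales, and regularization constant are all illustrative choices, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

side = 16                          # toy 16x16 pixel grid
n_pix = side * side
coords = (np.stack(np.meshgrid(np.arange(side), np.arange(side)), axis=-1)
          .reshape(-1, 2).astype(float))

def smooth_filters(n_filters, n_bumps=8, sigma=3.0):
    # Synthetic "weight matrix": each column is a random mixture of
    # Gaussian bumps, so neighbouring pixels carry correlated weights.
    W = np.zeros((n_pix, n_filters))
    for j in range(n_filters):
        centers = rng.uniform(0, side, size=(n_bumps, 2))
        for c in centers:
            d2 = ((coords - c) ** 2).sum(axis=1)
            W[:, j] += rng.standard_normal() * np.exp(-d2 / (2 * sigma**2))
    return W

W = smooth_filters(10)

def rbf(a, b, ell=3.0):
    # RBF kernel over pixel locations: encodes a smoothness prior.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * ell**2))

n_obs = int(0.10 * n_pix)          # observe 10% of each filter's weights
idx = rng.choice(n_pix, size=n_obs, replace=False)

K_oo = rbf(coords[idx], coords[idx])
K_ao = rbf(coords, coords[idx])
alpha = np.linalg.solve(K_oo + 1e-4 * np.eye(n_obs), W[idx])  # ridge fit
W_hat = K_ao @ alpha               # predict every weight of every filter

rel_err = np.linalg.norm(W_hat - W) / np.linalg.norm(W)
print(f"predicted {100 * (1 - n_obs / n_pix):.0f}% of weights, "
      f"relative error {rel_err:.3f}")
```

The key design choice is that the kernel over pixel locations supplies the prior correlating observed and unobserved weights; with a mismatched prior (e.g. white-noise filters), the same interpolation would fail.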
Year: 2013
Venue: neural information processing systems
DocType: Conference
Volume: abs/1306.0543
Citations: 155
PageRank: 9.82
References: 24
Authors: 5
Name                  Order  Citations  PageRank
Misha Denil           1      397        26.18
Babak Shakibi         2      171        10.60
Laurent Dinh          3      570        27.53
Marc'Aurelio Ranzato  4      52424      70.94
Nando De Freitas      5      32842      73.68