Title
Auto-Sizing Neural Networks: With Applications to n-gram Language Models
Abstract
Neural networks have been shown to improve performance across a range of natural-language tasks. However, designing and training them can be complicated. Frequently, researchers resort to repeated experimentation to pick optimal settings. In this paper, we address the issue of choosing the correct number of units in hidden layers. We introduce a method for automatically adjusting network size by pruning out hidden units through $\ell_{\infty,1}$ and $\ell_{2,1}$ regularization. We apply this method to language modeling and demonstrate its ability to correctly choose the number of hidden units while maintaining perplexity. We also include these models in a machine translation decoder and show that these smaller neural models maintain the significant improvements of their unpruned versions.
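The abstract names the core technique: pruning hidden units by adding $\ell_{\infty,1}$ and $\ell_{2,1}$ group regularizers to the training objective. Below is a minimal illustrative sketch (not the authors' code): it assumes each group is one hidden unit's row of weights, and the function names and pruning tolerance are assumptions for illustration only.

import numpy as np

def l21_penalty(W):
    """l_{2,1} norm: sum over hidden units of the l_2 norm of that unit's weight row."""
    return np.sqrt((W ** 2).sum(axis=1)).sum()

def linf1_penalty(W):
    """l_{inf,1} norm: sum over hidden units of the l_inf norm of that unit's weight row."""
    return np.abs(W).max(axis=1).sum()

def prune_zero_units(W, tol=1e-6):
    """Drop hidden units whose weight group the regularizer has driven to (near) zero."""
    keep = np.sqrt((W ** 2).sum(axis=1)) > tol
    return W[keep], keep

# Sketch of usage: train with objective = data_loss + lam * l21_penalty(W_hidden)
# (or linf1_penalty), then call prune_zero_units to shrink the layer automatically.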
Year
2015
Venue
Conference on Empirical Methods in Natural Language Processing
Field
Network size, Perplexity, Computer science, Machine translation, Regularization (mathematics), Sizing, n-gram, Natural language processing, Artificial intelligence, Artificial neural network, Machine learning, Language model
DocType
Volume
abs/1508.05051
Citations
5
Journal
PageRank
0.45
References
10
Authors
2
Name, Order, Citations, PageRank
Kenton Murray, 1, 5, 0.45
David Chiang, 2, 2843, 144.76