Title: Natural Language Processing with Small Feed-Forward Networks
Abstract
We show that small and shallow feed-forward neural networks can achieve near state-of-the-art results on a range of unstructured and structured language processing tasks while being considerably cheaper in memory and computational requirements than deep recurrent models. Motivated by resource-constrained environments like mobile phones, we showcase simple techniques for obtaining such small neural network models, and investigate different tradeoffs when deciding how to allocate a small memory budget.
Year: 2017
DOI: 10.18653/v1/d17-1309
Venue: Empirical Methods in Natural Language Processing (EMNLP)
DocType:
Volume: abs/1708.00214
Citations: 5
Journal:
PageRank: 0.44
References: 16
Authors: 8
Name              Order  Citations  PageRank
Jan A. Botha      1      19         3.08
Emily Pitler      2      573        27.65
Ji Ma             3      10         1.57
Anton Bakalov     4      5          0.44
Alex Salcianu     5      5          0.44
David J. Weiss    6      446        19.11
Ryan McDonald     7      4653       245.25
Slav Petrov       8      2405       107.56