Abstract |
---|
We show that small and shallow feed-forward neural networks can achieve near state-of-the-art results on a range of unstructured and structured language processing tasks while being considerably cheaper in memory and computational requirements than deep recurrent models. Motivated by resource-constrained environments like mobile phones, we showcase simple techniques for obtaining such small neural network models, and investigate different tradeoffs when deciding how to allocate a small memory budget. |
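The abstract describes small, shallow feed-forward models as a memory-efficient alternative to deep recurrent networks. Below is a minimal, illustrative sketch of that general idea: a single-hidden-layer classifier over concatenated embeddings of a small context window. It is not the paper's actual architecture; all sizes (`VOCAB_SIZE`, `EMBED_DIM`, `HIDDEN_DIM`, `NUM_CLASSES`, `WINDOW`) and the class name are hypothetical placeholders.

```python
import torch
import torch.nn as nn

# Illustrative sizes only; the paper's real hyperparameters are not reproduced here.
VOCAB_SIZE = 5000    # hypothetical feature vocabulary
EMBED_DIM = 16       # small embeddings keep the parameter count (memory) low
HIDDEN_DIM = 64      # one shallow hidden layer
NUM_CLASSES = 12     # e.g. a coarse tag set
WINDOW = 3           # features drawn from a small context window around each token

class SmallFeedForwardTagger(nn.Module):
    """Tiny feed-forward classifier over embedded window features."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.hidden = nn.Linear(WINDOW * EMBED_DIM, HIDDEN_DIM)
        self.out = nn.Linear(HIDDEN_DIM, NUM_CLASSES)

    def forward(self, feature_ids):
        # feature_ids: (batch, WINDOW) integer feature indices
        e = self.embed(feature_ids).flatten(start_dim=1)  # concatenate window embeddings
        h = torch.relu(self.hidden(e))
        return self.out(h)                                # unnormalized class scores

model = SmallFeedForwardTagger()
scores = model(torch.randint(0, VOCAB_SIZE, (8, WINDOW)))  # batch of 8 windows
print(scores.shape)  # torch.Size([8, 12])
```

In a sketch like this, the memory-budget tradeoff the abstract mentions corresponds roughly to how parameters are split between the embedding table (`VOCAB_SIZE * EMBED_DIM`) and the hidden layer (`WINDOW * EMBED_DIM * HIDDEN_DIM`).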
Year | DOI | Venue
---|---|---
2017 | 10.18653/v1/d17-1309 | Empirical Methods in Natural Language Processing

DocType | Volume | Citations
---|---|---
Journal | abs/1708.00214 | 5

PageRank | References | Authors
---|---|---
0.44 | 16 | 8
Name | Order | Citations | PageRank |
---|---|---|---
Jan A. Botha | 1 | 19 | 3.08 |
Emily Pitler | 2 | 573 | 27.65 |
Ji Ma | 3 | 10 | 1.57 |
Anton Bakalov | 4 | 5 | 0.44 |
Alex Salcianu | 5 | 5 | 0.44 |
David J. Weiss | 6 | 446 | 19.11 |
Ryan McDonald | 7 | 4653 | 245.25 |
Slav Petrov | 8 | 2405 | 107.56 |