Title
Slim Embedding Layers for Recurrent Neural Language Models
Abstract
Recurrent neural language models are the state-of-the-art models for language modeling. When the vocabulary is large, the space required to store the model parameters becomes the bottleneck in deploying recurrent neural language models. In this paper, we introduce a simple space-compression method that randomly shares structured parameters at both the input and output embedding layers of a recurrent neural language model, significantly reducing the number of model parameters while still compactly representing the original input and output embedding layers. The method is easy to implement and tune. Experiments on several data sets show that the new method achieves similar perplexity and BLEU scores while using only a tiny fraction of the parameters.
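The abstract does not spell out the sharing mechanism, but the idea of randomly sharing structured sub-vectors of an embedding matrix can be sketched as below. This is a minimal illustrative sketch, not the authors' released code: the class name `SlimEmbedding`, the hyper-parameters (`num_blocks`, `pool_size`), and the PyTorch framing are all assumptions made for the example.

```python
import torch
import torch.nn as nn

class SlimEmbedding(nn.Module):
    """Sketch of a randomly shared, structured embedding layer.

    Each D-dimensional embedding is split into `num_blocks` sub-vectors,
    and every (word, block) slot is randomly but permanently mapped to
    one of `pool_size` shared sub-vectors, so the parameter count drops
    from V * D to pool_size * (D // num_blocks).
    """

    def __init__(self, vocab_size, embed_dim, num_blocks, pool_size):
        super().__init__()
        assert embed_dim % num_blocks == 0
        block_dim = embed_dim // num_blocks
        # Shared pool of sub-vectors: the only learned parameters.
        self.pool = nn.Parameter(torch.randn(pool_size, block_dim) * 0.1)
        # Fixed random assignment of a pool row to each block of each word.
        assignment = torch.randint(pool_size, (vocab_size, num_blocks))
        self.register_buffer("assignment", assignment)

    def forward(self, token_ids):
        # token_ids: (...,) -> embeddings: (..., embed_dim)
        idx = self.assignment[token_ids]      # (..., num_blocks)
        blocks = self.pool[idx]               # (..., num_blocks, block_dim)
        return blocks.flatten(start_dim=-2)   # concatenate the sub-vectors

# Example: a 10k-word vocabulary with 256-dim embeddings built from
# 8 blocks drawn out of a pool of 4096 shared 32-dim sub-vectors,
# i.e. ~131k parameters instead of ~2.56M for a full embedding matrix.
emb = SlimEmbedding(vocab_size=10_000, embed_dim=256, num_blocks=8, pool_size=4096)
vectors = emb(torch.tensor([[1, 42, 9999]]))
print(vectors.shape)  # torch.Size([1, 3, 256])
```

Under these assumed settings the shared pool holds roughly 5% of the parameters of a full embedding matrix, which is the kind of "tiny fraction" the abstract claims; the same construction can be applied to the output (softmax) embedding layer.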
Year:       2018
Venue:      THIRTY-SECOND AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTIETH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / EIGHTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE
DocType:    Conference
Volume:     abs/1711.09873
Citations:  0
PageRank:   0.34
References: 21
Authors:    5
Name                Order    Citations    PageRank
Zhongliang Li       1        0            0.68
Raymond Kulhanek    2        0            0.34
Shaojun Wang        3        468          38.96
Yunxin Zhao         4        807          121.74
Shuang Wu           5        112          16.94