Title
Improving Word Embedding Factorization for Compression Using Distilled Nonlinear Neural Decomposition
Abstract
Word embeddings are vital components of Natural Language Processing (NLP) models and have been extensively explored. However, they consume a lot of memory, which poses a challenge for edge deployment. Embedding matrices typically contain most of the parameters of language models and about a third of the parameters of machine translation systems. In this paper, we propose Distilled Embedding, an (input/output) embedding compression method based on low-rank matrix decomposition and knowledge distillation. First, we initialize the weights of the decomposed matrices by learning to reconstruct the full pre-trained word embedding; we then fine-tune end-to-end, employing knowledge distillation on the factorized embedding. We conduct extensive experiments with various compression rates on machine translation and language modeling, using different datasets and sharing a single word-embedding matrix between the input embedding and the vocabulary projection. We show that the proposed technique is simple to replicate, with one fixed parameter controlling the compression rate, and that it achieves a higher BLEU score on translation and lower perplexity on language modeling than complex, difficult-to-tune state-of-the-art methods.
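As a rough illustration of the approach described in the abstract (low-rank factorization of the embedding matrix, reconstruction-based initialization from the pre-trained embedding, and knowledge distillation during fine-tuning), the following is a minimal PyTorch sketch. All names (FactorizedEmbedding, reconstruction_init, distillation_loss), the choice of tanh as the nonlinearity, and the KL-based distillation objective are assumptions made for illustration, not the authors' exact implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FactorizedEmbedding(nn.Module):
    # Approximates a |V| x d embedding matrix E with E ~ f(A) B,
    # where A is |V| x r, B is r x d, and r << d (nonlinear low-rank decomposition).
    def __init__(self, vocab_size, dim, rank):
        super().__init__()
        self.A = nn.Embedding(vocab_size, rank)
        self.B = nn.Linear(rank, dim, bias=False)

    def forward(self, token_ids):
        # tanh is an assumed choice of nonlinearity between the two factors.
        return self.B(torch.tanh(self.A(token_ids)))

def reconstruction_init(factorized, pretrained_E, steps=1000, lr=1e-3):
    # Step 1 (per the abstract): fit the factors to reconstruct the full
    # pre-trained embedding matrix before end-to-end fine-tuning.
    opt = torch.optim.Adam(factorized.parameters(), lr=lr)
    ids = torch.arange(pretrained_E.size(0))
    for _ in range(steps):
        opt.zero_grad()
        loss = F.mse_loss(factorized(ids), pretrained_E)
        loss.backward()
        opt.step()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # Step 2 (per the abstract): knowledge distillation during end-to-end
    # fine-tuning; the exact objective used in the paper is not shown here.
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)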
Year
2020
DOI
10.18653/V1/2020.FINDINGS-EMNLP.250
Venue
EMNLP
DocType
Conference
Volume
2020.findings-emnlp
Citations
0
PageRank
0.34
References
0
Authors
5
Name                    Order  Citations  PageRank
Vasileios Lioutas       1      1          2.09
Ahmad Azad Ab Rashid    2      3          5.03
Krtin Kumar             3      0          0.68
Md. Akmal Haidar        4      28         6.32
Mehdi Rezagholizadeh    5      3          8.82