Title
Fully Quantized Transformer for Machine Translation
Abstract
State-of-the-art neural machine translation methods employ massive numbers of parameters. Drastically reducing the computational cost of such methods without affecting performance has so far been unsuccessful. To this end, we propose FullyQT: an all-inclusive quantization strategy for the Transformer. To the best of our knowledge, we are the first to show that it is possible to avoid any loss in translation quality with a fully quantized Transformer. Indeed, compared to full precision, our 8-bit models achieve equal or higher BLEU scores on most tasks. Compared to all previously proposed methods, we achieve state-of-the-art quantization results.
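The abstract does not describe FullyQT's actual quantization scheme. Purely as a rough illustration of what mapping a tensor to 8-bit values involves, the following Python/NumPy sketch applies generic uniform min-max quantization with a scale and zero-point; the function names, the min-max calibration, and the use of NumPy are assumptions for illustration, not the paper's method.

# Hedged illustration only: generic uniform 8-bit (fake) quantization.
# The min-max calibration and all names here are assumptions, not FullyQT.
import numpy as np

def quantize_uint8(x: np.ndarray):
    """Uniformly quantize a float tensor to 8-bit integers with a scale and zero-point."""
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / 255.0 if hi > lo else 1.0
    zero_point = np.round(-lo / scale).astype(np.int32)
    q = np.clip(np.round(x / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Map 8-bit values back to floats (introduces quantization error)."""
    return (q.astype(np.float32) - zero_point) * scale

# Example: quantize a weight matrix and measure the round-trip error.
w = np.random.randn(4, 4).astype(np.float32)
q, s, zp = quantize_uint8(w)
w_hat = dequantize(q, s, zp)
print("max abs error:", np.abs(w - w_hat).max())

Storing the scale and zero-point alongside the 8-bit values is what lets the model trade a small, bounded rounding error for a roughly 4x reduction in memory relative to 32-bit floats.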
Year: 2020
DOI: 10.18653/V1/2020.FINDINGS-EMNLP.1
Venue: EMNLP
DocType: Conference
Volume: 2020.findings-emnlp
Citations: 0
PageRank: 0.34
References: 0
Authors: 3
Name                  Order  Citations  PageRank
Gabriele Prato        1      0          0.34
Ella Charlaix         2      0          0.34
Mehdi Rezagholizadeh  3      3          8.82