Title
Scaling Neural Machine Translation
Abstract
Sequence to sequence learning models still require several days to reach state of the art performance on large benchmark datasets using a single machine. This paper shows that reduced precision and large batch training can speed up training by nearly 5x on a single 8-GPU machine with careful tuning and implementation. On WMT'14 English-German translation, we match the accuracy of Vaswani et al. (2017) in under 5 hours when training on 8 GPUs and we obtain a new state of the art of 29.3 BLEU after training for 85 minutes on 128 GPUs. We further improve these results to 29.8 BLEU by training on the much larger Paracrawl dataset. On the WMT'14 English-French task, we obtain a state-of-the-art BLEU of 43.2 in 8.5 hours on 128 GPUs.
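The abstract attributes the speedup to two ingredients: reduced-precision (FP16) training and very large effective batches spread over many GPUs. The sketch below is only a minimal illustration of those two ideas, not the paper's fairseq implementation; it assumes PyTorch's automatic mixed precision (torch.cuda.amp) and uses gradient accumulation to emulate a larger batch. The model, data, and hyperparameters are placeholders.

```python
# Hedged sketch: mixed-precision training plus gradient accumulation.
# Not the paper's code; the toy model and random "batches" stand in for a
# real Transformer and WMT data.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# Placeholder model; the paper trains a Transformer (Vaswani et al., 2017).
model = nn.Sequential(nn.Embedding(1000, 256), nn.Linear(256, 1000)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))  # loss scaling for FP16

accumulation_steps = 16  # accumulate gradients to emulate a 16x larger batch

def fake_batch(batch_size=32, seq_len=10):
    # Stand-in for a real (source, target) batch of token ids.
    tokens = torch.randint(0, 1000, (batch_size, seq_len), device=device)
    return tokens, tokens

model.train()
optimizer.zero_grad()
for step in range(100):
    src, tgt = fake_batch()
    # Forward/backward in reduced precision where it is numerically safe.
    with torch.cuda.amp.autocast(enabled=(device == "cuda")):
        logits = model(src)
        loss = nn.functional.cross_entropy(logits.view(-1, 1000), tgt.view(-1))
        loss = loss / accumulation_steps  # average over the virtual large batch
    scaler.scale(loss).backward()
    if (step + 1) % accumulation_steps == 0:
        scaler.step(optimizer)   # unscales grads; skips the step on FP16 overflow
        scaler.update()
        optimizer.zero_grad()
```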
Year
2018
DOI
10.18653/v1/w18-6301
Venue
WMT
DocType
Conference
Volume
abs/1806.00187
Citations
22
PageRank
0.65
References
18
Authors
4
Name            Order  Citations  PageRank
Myle Ott        1      524        26.11
Sergey Edunov   2      204        10.37
David Grangier  3      816        41.60
Michael Auli    4      1061       53.54