Title
Cross-lingual Language Model Pretraining
Abstract
Recent studies have demonstrated the efficiency of generative pretraining for English natural language understanding. In this work, we extend this approach to multiple languages and show the effectiveness of cross-lingual pretraining. We propose two methods to learn cross-lingual language models (XLMs): one unsupervised that only relies on monolingual data, and one supervised that leverages parallel data with a new cross-lingual language model objective. We obtain state-of-the-art results on cross-lingual classification, unsupervised and supervised machine translation. On XNLI, our approach pushes the state of the art by an absolute gain of 4.9% accuracy. On unsupervised machine translation, we obtain 34.3 BLEU on WMT'16 German-English, improving the previous state of the art by more than 9 BLEU. On supervised machine translation, we obtain a new state of the art of 38.5 BLEU on WMT'16 Romanian-English, outperforming the previous best approach by more than 4 BLEU. Our code and pretrained models are publicly available(1).
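The abstract mentions a supervised objective that leverages parallel data (translation language modeling, TLM). Below is a minimal, hypothetical sketch of the idea, assuming a BERT-style [MASK] token, a simple sentence separator, and a 15% masking rate; the function name, separator symbol, and rates are illustrative assumptions, not the authors' released implementation.

import random

MASK = "[MASK]"

def tlm_masking(src_tokens, tgt_tokens, mask_prob=0.15, seed=0):
    """Build one TLM-style example: concatenate a parallel pair and mask tokens in both languages."""
    rng = random.Random(seed)
    stream = src_tokens + ["</s>"] + tgt_tokens   # source sentence, separator, target sentence
    inputs, targets = [], []
    for tok in stream:
        if tok != "</s>" and rng.random() < mask_prob:
            inputs.append(MASK)      # hide the token in the model input
            targets.append(tok)      # the model must predict the original token here
        else:
            inputs.append(tok)
            targets.append(None)     # no prediction loss at unmasked positions
    return inputs, targets

# Toy usage on an English-French pair.
en = "the cat sat on the mat".split()
fr = "le chat est assis sur le tapis".split()
inp, tgt = tlm_masking(en, fr, mask_prob=0.3)
print(inp)
print(tgt)

Because each masked position can draw on context from both languages of the concatenated pair, this kind of objective encourages aligned cross-lingual representations, which is the intuition behind the supervised method described in the abstract.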
Year
2019
Venue
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019)
Keywords
multiple languages, absolute gain
Field
Cross lingual, Computer science, Natural language processing, Artificial intelligence, Language model, Machine learning
DocType
Conference
Volume
32
ISSN
1049-5258
Citations
6
PageRank
0.42
References
0
Authors
2
Name                Order   Citations   PageRank
Alexis Conneau      1       342         15.03
Guillaume Lample    2       651         22.75