Title
Unicoder: A Universal Language Encoder by Pre-training with Multiple Cross-lingual Tasks
Abstract
We present Unicoder, a universal language encoder that is insensitive to different languages. Given an arbitrary NLP task, a model can be trained with Unicoder using training data in one language and directly applied to inputs of the same task in other languages. Compared with similar efforts such as Multilingual BERT and XLM, three new cross-lingual pre-training tasks are proposed: cross-lingual word recovery, cross-lingual paraphrase classification, and cross-lingual masked language model. These tasks help Unicoder learn the mappings among different languages from more perspectives. We also find that fine-tuning on multiple languages together brings further improvement. Experiments are performed on two tasks, cross-lingual natural language inference (XNLI) and cross-lingual question answering (XQA), with XLM as the baseline. On XNLI, an average accuracy improvement of 1.8% across 15 languages is obtained. On XQA, a new cross-lingual dataset built by us, an average accuracy improvement of 5.5% across French and German is obtained.
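As an informal illustration of the cross-lingual masked language model task mentioned in the abstract, the sketch below builds one training instance from a bilingual sentence pair: the source sentence and its translation are concatenated and a fraction of tokens is masked, so the model must recover a word in one language from context in both languages. Whitespace tokenization, BERT-style special tokens, and the 15% mask rate are assumptions made here for illustration, not the authors' implementation.

import random

MASK, CLS, SEP = "[MASK]", "[CLS]", "[SEP]"

def make_xmlm_example(src_tokens, tgt_tokens, mask_prob=0.15, seed=None):
    """Build one cross-lingual masked-LM instance from a bilingual
    sentence pair: concatenate source and translation, then mask a
    fraction of the tokens."""
    rng = random.Random(seed)
    tokens = [CLS] + src_tokens + [SEP] + tgt_tokens + [SEP]
    inputs, labels = [], []
    for tok in tokens:
        if tok not in (CLS, SEP) and rng.random() < mask_prob:
            inputs.append(MASK)
            labels.append(tok)   # the model is trained to predict the original token here
        else:
            inputs.append(tok)
            labels.append(None)  # this position is not scored
    return inputs, labels

if __name__ == "__main__":
    en = "the cat sat on the mat".split()
    fr = "le chat est assis sur le tapis".split()
    inputs, labels = make_xmlm_example(en, fr, seed=0)
    print(inputs)
    print(labels)

Because the masked positions can attend to the translation as well as the original sentence, this objective encourages aligned representations across languages, which is the intuition behind the cross-lingual pre-training tasks described above.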
Year
2019
DOI
10.18653/v1/D19-1252
Venue
EMNLP/IJCNLP (1)
DocType
Conference
Volume
D19-1
Citations
1
PageRank
0.36
References
0
Authors
7
Name | Order | Citations | PageRank
Haoyang Huang | 1 | 1 | 2.05
Yaobo Liang | 2 | 1 | 3.40
Nan Duan | 3 | 213 | 45.87
Ming Gong | 4 | 17 | 11.45
Linjun Shou | 5 | 13 | 10.73
Daxin Jiang | 6 | 1316 | 72.60
Ming Zhou | 7 | 4262 | 251.74