Title
XLM-E: Cross-lingual Language Model Pre-training via ELECTRA
Abstract
In this paper, we introduce ELECTRA-style tasks (Clark et al., 2020b) to cross-lingual language model pre-training. Specifically, we present two pre-training tasks, namely multilingual replaced token detection and translation replaced token detection. In addition, we pretrain the model, named XLM-E, on both multilingual and parallel corpora. Our model outperforms the baseline models on various cross-lingual understanding tasks at a much lower computation cost. Moreover, analysis shows that XLM-E tends to obtain better cross-lingual transferability.
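For reference, the sketch below illustrates the ELECTRA-style replaced token detection (RTD) objective that the abstract refers to: a small generator fills in masked positions, and a discriminator predicts, for each token, whether it was replaced. This is a minimal illustrative sketch, not the authors' implementation; the toy encoder, vocabulary size, masking rate, and unweighted loss sum are assumptions rather than values from the paper. For translation replaced token detection, the input would be a concatenated translation pair instead of a monolingual sequence.

# Minimal sketch of an ELECTRA-style replaced token detection objective.
# All sizes and hyperparameters here are illustrative, not from XLM-E.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB_SIZE, HIDDEN, MASK_ID = 1000, 64, 3


class TinyEncoder(nn.Module):
    """Toy Transformer encoder standing in for the generator/discriminator."""

    def __init__(self, out_dim):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, HIDDEN)
        layer = nn.TransformerEncoderLayer(HIDDEN, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(HIDDEN, out_dim)

    def forward(self, tokens):
        return self.head(self.encoder(self.embed(tokens)))


generator = TinyEncoder(out_dim=VOCAB_SIZE)   # small MLM generator
discriminator = TinyEncoder(out_dim=1)        # per-token original/replaced classifier


def rtd_loss(tokens, mask_prob=0.15):
    """Replaced-token-detection loss for a batch of token ids (batch, seq_len)."""
    # 1. Mask a random subset of positions; ensure at least one is masked.
    masked_pos = torch.rand(tokens.shape) < mask_prob
    if not masked_pos.any():
        masked_pos[0, 0] = True
    corrupted_input = tokens.masked_fill(masked_pos, MASK_ID)

    # 2. The generator proposes replacements; samples are not backpropagated through.
    with torch.no_grad():
        samples = torch.distributions.Categorical(
            logits=generator(corrupted_input)
        ).sample()
    corrupted = torch.where(masked_pos, samples, tokens)

    # 3. The discriminator predicts, for every position, whether it was replaced.
    is_replaced = (corrupted != tokens).float()
    scores = discriminator(corrupted).squeeze(-1)
    disc_loss = F.binary_cross_entropy_with_logits(scores, is_replaced)

    # 4. The generator itself is trained with masked language modeling.
    gen_logits = generator(corrupted_input)
    mlm_loss = F.cross_entropy(gen_logits[masked_pos], tokens[masked_pos])

    # Real setups weight the discriminator term (ELECTRA uses a multiplier);
    # an unweighted sum is used here to keep the sketch simple.
    return mlm_loss + disc_loss


batch = torch.randint(4, VOCAB_SIZE, (2, 16))  # toy batch: 2 sequences of 16 tokens
print(rtd_loss(batch))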
Year
2022
DOI
10.18653/v1/2022.acl-long.427
Venue
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022), Vol. 1: (Long Papers)
DocType
Conference
Volume
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Citations
2
PageRank
0.36
References
0
Authors
11
Name | Order | Citations | PageRank
Zewen Chi | 1 | 2 | 2.39
Shaohan Huang | 2 | 57 | 10.29
Li Dong | 3 | 582 | 31.86
Shuming Ma | 4 | 83 | 15.92
Bo Zheng | 5 | 12 | 10.73
Saksham Singhal | 6 | 2 | 1.71
Payal Bajaj | 7 | 6 | 1.44
Xia Song | 8 | 30 | 3.19
Xian-Ling Mao | 9 | 99 | 25.19
Heyan Huang | 10 | 173 | 61.47
Furu Wei | 11 | 1956 | 107.57