Abstract |
---|
Multilingual pretrained language models (such as multilingual BERT) have achieved impressive results on cross-lingual transfer. However, with a fixed model capacity shared across many languages, multilingual pretraining usually lags behind its monolingual counterparts. In this work, we present two approaches that improve zero-shot cross-lingual classification by transferring knowledge from monolingual pretrained models to multilingual ones. Experimental results on two cross-lingual classification benchmarks show that our methods outperform vanilla multilingual fine-tuning. |
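
The abstract only names the general idea, so the following is a minimal sketch (not the paper's two methods) of one common way to realize it: fine-tune a multilingual student on English data while distilling from a monolingual English teacher, then evaluate the student zero-shot on another language. The model names, the distillation weight `alpha`, the temperature, and the toy data are all illustrative assumptions.

```python
# Hypothetical sketch: monolingual-to-multilingual knowledge transfer via distillation.
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForSequenceClassification

teacher_name = "bert-base-cased"               # monolingual (English) teacher, assumed
student_name = "bert-base-multilingual-cased"  # multilingual student, assumed

teacher_tok = AutoTokenizer.from_pretrained(teacher_name)
student_tok = AutoTokenizer.from_pretrained(student_name)
teacher = AutoModelForSequenceClassification.from_pretrained(teacher_name, num_labels=2).eval()
student = AutoModelForSequenceClassification.from_pretrained(student_name, num_labels=2)

optimizer = torch.optim.AdamW(student.parameters(), lr=2e-5)
alpha, temperature = 0.5, 2.0  # assumed distillation hyper-parameters

# Toy English training examples (label 1 = positive, 0 = negative).
train_texts = ["The movie was wonderful.", "The plot was a complete mess."]
train_labels = torch.tensor([1, 0])

student.train()
for _ in range(3):  # a few illustrative training steps
    t_inputs = teacher_tok(train_texts, padding=True, return_tensors="pt")
    s_inputs = student_tok(train_texts, padding=True, return_tensors="pt")
    with torch.no_grad():
        t_logits = teacher(**t_inputs).logits
    s_logits = student(**s_inputs).logits

    # Supervised cross-entropy on English labels + KL distillation toward the teacher.
    ce = F.cross_entropy(s_logits, train_labels)
    kd = F.kl_div(
        F.log_softmax(s_logits / temperature, dim=-1),
        F.softmax(t_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    loss = (1 - alpha) * ce + alpha * kd

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Zero-shot evaluation: the fine-tuned multilingual student classifies a non-English input.
student.eval()
x = student_tok("La película fue maravillosa.", return_tensors="pt")
print(student(**x).logits.argmax(dim=-1))
```

The design choice illustrated here is that only English supervision and an English teacher are used at training time; cross-lingual generalization comes entirely from the multilingual student's shared representations.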
Year | Venue | DocType
---|---|---
2020 | AACL/IJCNLP | Conference

Citations | PageRank | References
---|---|---
0 | 0.34 | 0
Authors |
---|
5 |

Name | Order | Citations | PageRank
---|---|---|---
Chi Zewen | 1 | 0 | 0.34 |
Li Dong | 2 | 582 | 31.86 |
Furu Wei | 3 | 1956 | 107.57 |
Xian-Ling Mao | 4 | 99 | 25.19 |
Heyan Huang | 5 | 173 | 61.47 |