Abstract
---
This paper presents XLS-R, a large-scale model for cross-lingual speech representation learning based on wav2vec 2.0. We train models with up to 2B parameters on nearly half a million hours of publicly available speech audio in 128 languages, an order of magnitude more public data than the largest known prior work. Our evaluation covers a wide range of tasks, domains, data regimes, and languages, both high- and low-resource. On the CoVoST-2 speech translation benchmark, we improve the previous state of the art by an average of 7.4 BLEU over 21 translation directions into English. For speech recognition, XLS-R improves over the best known prior work on BABEL, MLS, CommonVoice, and VoxPopuli, lowering error rates by 14-34% relative on average. XLS-R also sets a new state of the art on VoxLingua107 language identification. Moreover, we show that with sufficient model size, cross-lingual pretraining can outperform English-only pretraining when translating English speech into other languages, a setting which favors monolingual pretraining. We hope XLS-R can help to improve speech processing tasks for many more languages of the world.
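The pretrained model can be used directly as a multilingual speech encoder. Below is a minimal sketch of extracting XLS-R representations with the Hugging Face `transformers` library; it assumes the publicly released `facebook/wav2vec2-xls-r-300m` checkpoint (the 1B and 2B variants follow the same API), and the silent waveform is a stand-in for real 16 kHz audio.

```python
# Minimal sketch: extract XLS-R speech representations via Hugging Face
# transformers. Assumes the public facebook/wav2vec2-xls-r-300m checkpoint.
import torch
from transformers import AutoFeatureExtractor, Wav2Vec2Model

model_name = "facebook/wav2vec2-xls-r-300m"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_name)
model = Wav2Vec2Model.from_pretrained(model_name)

# XLS-R expects 16 kHz mono audio; one second of silence stands in for speech.
waveform = torch.zeros(16000)
inputs = feature_extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# One contextual vector per ~20 ms frame, usable as features for downstream
# ASR, speech translation, or language-identification fine-tuning.
print(outputs.last_hidden_state.shape)  # (1, num_frames, hidden_size)
```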
Year | DOI | Venue
---|---|---
2022 | 10.21437/INTERSPEECH.2022-143 | Conference of the International Speech Communication Association (INTERSPEECH)

DocType | Citations | PageRank
---|---|---
Conference | 0 | 0.34

References | Authors
---|---
0 | 13
Name | Order | Citations | PageRank |
---|---|---|---
Arun Babu | 1 | 0 | 0.68 |
Changhan Wang | 2 | 0 | 2.37 |
Andros Tjandra | 3 | 0 | 0.68 |
Kushal Lakhotia | 4 | 0 | 0.68 |
Qiantong Xu | 5 | 34 | 7.42 |
Naman Goyal | 6 | 0 | 2.03 |
Kritika Singh | 7 | 0 | 0.68 |
Patrick von Platen | 8 | 1 | 1.04 |
Yatharth Saraf | 9 | 0 | 0.68 |
Juan Pino | 10 | 21 | 12.63 |
Alexei Baevski | 11 | 85 | 9.52 |
Alexis Conneau | 12 | 342 | 15.03 |
Michael Auli | 13 | 1061 | 53.54 |