Title: XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale
Abstract
This paper presents XLS-R, a large-scale model for cross-lingual speech representation learning based on wav2vec 2.0. We train models with up to 2B parameters on nearly half a million hours of publicly available speech audio in 128 languages, an order of magnitude more public data than the largest known prior work. Our evaluation covers a wide range of tasks, domains, data regimes and languages, both high and low-resource. On the CoVoST-2 speech translation benchmark, we improve the previous state of the art by an average of 7.4 BLEU over 21 translation directions into English. For speech recognition, XLS-R improves over the best known prior work on BABEL, MLS, CommonVoice as well as VoxPopuli, lowering error rates by 14-34% relative on average. XLS-R also sets a new state of the art on VoxLingua107 language identification. Moreover, we show that with sufficient model size, cross-lingual pretraining can outperform English-only pretraining when translating English speech into other languages, a setting which favors monolingual pretraining. We hope XLS-R can help to improve speech processing tasks for many more languages of the world.
Year: 2022
DOI: 10.21437/INTERSPEECH.2022-143
Venue: Conference of the International Speech Communication Association (INTERSPEECH)
DocType: Conference
Citations: 0
PageRank: 0.34
References: 0
Authors: 13
1. Arun Babu
2. Changhan Wang
3. Andros Tjandra
4. Kushal Lakhotia
5. Qiantong Xu
6. Naman Goyal
7. Kritika Singh
8. Patrick von Platen
9. Yatharth Saraf
10. Juan Pino
11. Alexei Baevski
12. Alexis Conneau
13. Michael Auli