Title
Unified Speech-Text Pre-training for Speech Translation and Recognition
Abstract
We describe a method to jointly pre-train speech and text in an encoder-decoder modeling framework for speech translation and recognition. The proposed method incorporates four self-supervised and supervised subtasks for cross-modality learning. A self-supervised speech subtask leverages unlabelled speech data, and a (self-)supervised text-to-text subtask makes use of abundant text training data. Two auxiliary supervised speech tasks are included to unify the speech and text modeling spaces. Our contribution lies in integrating linguistic information from the text corpus into speech pre-training. Detailed analysis reveals learning interference among the subtasks. Two pre-training configurations, for speech translation and recognition respectively, are presented to alleviate subtask interference. Our experiments show that the proposed method can effectively fuse speech and text information into one model. It achieves between 1.7 and 2.3 BLEU improvement over the state of the art on the MuST-C speech translation dataset and comparable WERs to wav2vec 2.0 on the LibriSpeech speech recognition task.
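To make the multi-task setup described in the abstract concrete, below is a minimal sketch of how four subtask losses (self-supervised speech, text-to-text, and two auxiliary supervised speech tasks) could be combined into one weighted pre-training objective. All names here (TinyJointModel, joint_pretraining_loss, pseudo_units, the subtask weights, the tiny GRU/embedding encoders) are illustrative assumptions for this sketch, not the paper's architecture or code; the actual system is a much larger encoder-decoder model.

```python
# Minimal PyTorch sketch of a joint speech-text multi-task loss.
# Hypothetical module/field names; not the paper's implementation.
import torch
import torch.nn as nn

FEAT_DIM, HIDDEN, VOCAB = 80, 256, 1000  # toy dimensions

class TinyJointModel(nn.Module):
    """Separate speech/text encoders feeding one shared output projection."""
    def __init__(self):
        super().__init__()
        self.speech_encoder = nn.GRU(FEAT_DIM, HIDDEN, batch_first=True)  # stand-in for a wav2vec-style encoder
        self.text_encoder = nn.Embedding(VOCAB, HIDDEN)                   # stand-in for a text encoder
        self.shared_proj = nn.Linear(HIDDEN, VOCAB)                       # stand-in for a shared decoder

    def encode_speech(self, feats):
        out, _ = self.speech_encoder(feats)
        return self.shared_proj(out)                 # (batch, frames, vocab)

    def encode_text(self, tokens):
        return self.shared_proj(self.text_encoder(tokens))  # (batch, tokens, vocab)

def joint_pretraining_loss(model, batch, weights=(1.0, 1.0, 0.5, 0.5)):
    """Weighted sum of the four subtask losses named in the abstract:
    (1) self-supervised speech, (2) text-to-text, (3)+(4) auxiliary
    supervised speech tasks tying the two modalities together."""
    ce = nn.CrossEntropyLoss()
    losses = [
        ce(model.encode_speech(batch["unlabelled_speech"]).flatten(0, 1), batch["pseudo_units"].flatten()),
        ce(model.encode_text(batch["source_text"]).flatten(0, 1), batch["target_text"].flatten()),
        ce(model.encode_speech(batch["paired_speech"]).flatten(0, 1), batch["transcript"].flatten()),
        ce(model.encode_speech(batch["paired_speech"]).flatten(0, 1), batch["translation"].flatten()),
    ]
    return sum(w * l for w, l in zip(weights, losses))

if __name__ == "__main__":
    model = TinyJointModel()
    batch = {
        "unlabelled_speech": torch.randn(2, 50, FEAT_DIM),
        "pseudo_units": torch.randint(0, VOCAB, (2, 50)),
        "source_text": torch.randint(0, VOCAB, (2, 20)),
        "target_text": torch.randint(0, VOCAB, (2, 20)),
        "paired_speech": torch.randn(2, 50, FEAT_DIM),
        "transcript": torch.randint(0, VOCAB, (2, 50)),
        "translation": torch.randint(0, VOCAB, (2, 50)),
    }
    loss = joint_pretraining_loss(model, batch)
    loss.backward()
    print(float(loss))
```

How the subtasks are weighted and mixed is exactly where the abstract reports interference, which is why the paper presents separate pre-training configurations for speech translation and for speech recognition.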
Year: 2022
DOI: 10.18653/v1/2022.acl-long.105
Venue: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022), Vol. 1: Long Papers
DocType:
Volume:
Citations: 0
Conference: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
PageRank: 0.34
References: 0
Authors: 11
Name                   Order  Citations  PageRank
Yun Tang                   1          0      0.68
Hongyu Gong                2          0      1.01
Ning Dong                  3          0      0.34
Changhan Wang              4          0      2.37
Wei-Ning Hsu               5        115     13.93
Jiatao Gu                  6        274     22.59
Alexei Baevski             7         85      9.52
Xian Li                    8        136     16.76
Abdel-rahman Mohamed       9       3772    266.13
Michael Auli              10       1061     53.54
Juan Pino                 11         21     12.63