Title: Injecting Text in Self-Supervised Speech Pretraining
Abstract: Self-supervised pretraining for Automatic Speech Recognition (ASR) has shown varied degrees of success. In this paper, we propose to jointly learn representations during pretraining from two different modalities: speech and text. The proposed method, tts4pretrain, complements the power of contrastive learning in self-supervision with linguistic/lexical representations derived from synthesized speech, effectively learning from untranscribed speech and unspoken text. Lexical learning in the speech encoder is enforced through an additional sequence loss term coupled with the contrastive loss during pretraining. We demonstrate that this novel pretraining method yields Word Error Rate (WER) reductions of 10% relative on the well-benchmarked Librispeech task over a state-of-the-art baseline pretrained with wav2vec 2.0 only. The proposed method also serves as an effective strategy to compensate for a lack of transcribed speech: on the AMI meeting transcription task, it matches the performance of a system trained on 5,000 hours of transcribed speech while using only 100 hours. Finally, we demonstrate WER reductions of up to 15% on an in-house Voice Search task over traditional pretraining. Incorporating text into encoder pretraining is complementary to rescoring with a larger or in-domain language model, yielding an additional 6% relative reduction in WER.
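The abstract describes coupling a contrastive self-supervision loss on untranscribed speech with a sequence loss on TTS-synthesized speech from unspoken text. The sketch below is a minimal PyTorch illustration of such a joint objective, assuming an InfoNCE-style contrastive term (as in wav2vec 2.0) and a framewise cross-entropy stand-in for the sequence loss; all function names, tensor shapes, and the weighting scheme are hypothetical, not the paper's implementation.

```python
# Minimal sketch (assumptions, not the authors' code) of a joint pretraining
# objective: contrastive loss on untranscribed speech + sequence loss on
# TTS-synthesized speech whose transcript (the unspoken text) is known.
import torch
import torch.nn.functional as F

def contrastive_loss(context, target, negatives, temperature=0.1):
    """InfoNCE over masked frames: identify the true quantized target among distractors."""
    pos = F.cosine_similarity(context, target, dim=-1) / temperature                   # (B, T)
    neg = F.cosine_similarity(context.unsqueeze(2), negatives, dim=-1) / temperature   # (B, T, K)
    logits = torch.cat([pos.unsqueeze(2), neg], dim=2)   # true target sits at index 0
    labels = torch.zeros(logits.shape[:2], dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits.flatten(0, 1), labels.flatten())

def joint_pretrain_loss(speech_ctx, speech_tgt, speech_neg,
                        synth_logits, text_ids, seq_weight=1.0):
    """Total loss = contrastive term (speech) + weighted sequence term (synthesized text)."""
    l_con = contrastive_loss(speech_ctx, speech_tgt, speech_neg)
    # The transcript of synthesized speech is known by construction, so a
    # supervised sequence loss can be applied to the encoder's outputs for it.
    l_seq = F.cross_entropy(synth_logits.transpose(1, 2), text_ids)
    return l_con + seq_weight * l_seq

# Toy run: batch 2, 50 masked frames, dim 256, 10 negatives, 20 text tokens, 32 symbols.
B, T, D, K, U, V = 2, 50, 256, 10, 20, 32
loss = joint_pretrain_loss(
    torch.randn(B, T, D, requires_grad=True),   # encoder context vectors (masked speech)
    torch.randn(B, T, D),                       # quantized targets
    torch.randn(B, T, K, D),                    # sampled distractors (negatives)
    torch.randn(B, U, V, requires_grad=True),   # decoder logits on synthesized speech
    torch.randint(0, V, (B, U)))                # token ids of the unspoken text
loss.backward()
```

In the paper's setting, the sequence term is what enforces lexical learning in the speech encoder; the relative weighting (`seq_weight` here) is a tunable assumption of this sketch rather than a value from the paper.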
Year: 2021
DOI: 10.1109/ASRU51503.2021.9688018
Venue: 2021 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)
Keywords: Speech Recognition, Speech Synthesis, Self-supervised, Representation learning
DocType: Conference
ISBN: 978-1-6654-3740-0
Citations: 1
PageRank: 0.35
References: 0
Authors: 6

Name                 Order  Citations  PageRank
Zhehuai Chen         1      11         3.89
Yu Zhang             2      442        41.79
Andrew Rosenberg     3      7          2.09
Bhuvana Ramabhadran  4      1          0.35
Gary Wang            5      9          2.86
Pedro Moreno         6      1          0.35