Title
WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing
Abstract
Self-supervised learning (SSL) has achieved great success in speech recognition, while limited exploration has been attempted for other speech processing tasks. As the speech signal contains multi-faceted information, including speaker identity, paralinguistics, and spoken content, learning universal representations for all speech tasks is challenging. To tackle this problem, we propose a new pre-trained model, WavLM, to solve full-stack downstream speech tasks. WavLM jointly learns masked speech prediction and denoising during pre-training. In this way, WavLM not only retains the speech content modeling capability through masked speech prediction, but also improves its potential for non-ASR tasks through speech denoising. In addition, WavLM employs a gated relative position bias in the Transformer structure to better capture the sequence ordering of the input speech. We also scale up the training dataset from 60k hours to 94k hours. WavLM Large achieves state-of-the-art performance on the SUPERB benchmark and brings significant improvements to various speech processing tasks on their representative benchmarks.
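The abstract states that WavLM combines masked speech prediction with denoising, i.e., the model receives simulated noisy or overlapped speech as input while its targets are still derived from the clean utterance. The snippet below is a minimal sketch of how such inputs could be simulated; it is not the authors' code, and the function name `mix_for_denoising` and its parameters (SNR range, overlap ratio) are illustrative assumptions rather than the paper's exact recipe.

```python
# Minimal sketch (assumed, not the authors' implementation) of simulating a
# noisy/overlapped input for joint masked speech prediction and denoising:
# an interference signal (background noise or another utterance) is overlaid
# on part of the primary utterance at a random SNR, while training targets
# would still come from the clean primary utterance.
import numpy as np


def mix_for_denoising(primary: np.ndarray,
                      interference: np.ndarray,
                      snr_db_range=(-5.0, 20.0),
                      max_overlap_ratio=0.5,
                      rng=None) -> np.ndarray:
    """Return a corrupted copy of `primary` with `interference` mixed into a
    randomly chosen region at a randomly sampled SNR (in dB)."""
    rng = rng or np.random.default_rng()
    noisy = primary.copy()

    # Choose how much of the primary utterance is covered by interference.
    overlap_len = int(len(primary) * rng.uniform(0.0, max_overlap_ratio))
    if overlap_len == 0:
        return noisy
    start = int(rng.integers(0, len(primary) - overlap_len + 1))

    # Trim or tile the interference signal to the overlap length.
    seg = np.resize(interference, overlap_len)

    # Scale the interference so the mixture matches the sampled SNR.
    snr_db = rng.uniform(*snr_db_range)
    p_energy = np.mean(primary[start:start + overlap_len] ** 2) + 1e-8
    i_energy = np.mean(seg ** 2) + 1e-8
    scale = np.sqrt(p_energy / (i_energy * 10.0 ** (snr_db / 10.0)))
    noisy[start:start + overlap_len] += scale * seg
    return noisy
```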
Year
2022
DOI
10.1109/JSTSP.2022.3188113
Venue
IEEE Journal of Selected Topics in Signal Processing
Keywords
Self-supervised learning, speech pre-training
DocType
Journal
Volume
16
Issue
6
ISSN
1932-4553
Citations
7
PageRank
0.42
References
29
Authors
18
Name              Order  Citations  PageRank
Sanyuan Chen      1      7          0.42
Chengyi Wang      2      11         4.89
Zhengyang Chen    3      8          5.17
Yu Wu             4      119        13.36
Shujie Liu        5      338        37.84
Zhuo Chen         6      153        24.33
Jinyu Li          7      915        72.84
Naoyuki Kanda     8      103        19.45
Takuya Yoshioka   9      585        49.20
Xiong Xiao        10     281        34.97
Jian Wu           11     14         2.88
Jian Wu           12     14         2.88
Long Zhou         13     29         11.00
Shuo Ren          14     8          1.45
Yao Qian          15     527        51.55
Yao Qian          16     527        51.55
Michael Zeng      17     7          0.42
Furu Wei          18     1956       107.57