Title
Improved Neural Language Model Fusion for Streaming Recurrent Neural Network Transducer
Abstract
The Recurrent Neural Network Transducer (RNN-T), like most end-to-end speech recognition architectures, contains only an implicit neural network language model (NNLM) and cannot easily leverage unpaired text data during training. Previous work has proposed various fusion methods to incorporate external NNLMs into end-to-end ASR to address this weakness. In this paper, we propose extensions to these techniques that allow RNN-T to exploit external NNLMs during both training and inference, yielding 13-18% relative word error rate (WER) improvement on LibriSpeech over strong baselines. Furthermore, our methods incur no extra algorithmic latency and allow flexible plug-and-play of different NNLMs without re-training. We also share in-depth analysis to better understand the benefits of the different NNLM fusion methods. Our work provides a reliable technique for leveraging unpaired text data to significantly improve RNN-T while keeping the system streamable, flexible, and lightweight.
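The fusion methods the abstract refers to generally combine the end-to-end model's hypothesis scores with an external NNLM's scores. As a minimal illustration (a generic sketch of shallow fusion, not the paper's specific method; the function name and toy probabilities are invented for this example), the two models' per-token log-probabilities can be log-linearly interpolated during decoding:

```python
import math

def shallow_fusion_score(asr_log_probs, lm_log_probs, lm_weight=0.3):
    """Generic shallow fusion: log-linearly interpolate per-token
    log-probabilities from the ASR model and an external LM:
        score(y) = log P_ASR(y | x) + lm_weight * log P_LM(y)
    Tokens unknown to the LM receive -inf from the LM term.
    """
    return {
        tok: asr_log_probs[tok] + lm_weight * lm_log_probs.get(tok, -math.inf)
        for tok in asr_log_probs
    }

# Toy example: the external LM shifts the ranking toward "their".
asr = {"there": math.log(0.50), "their": math.log(0.45), "the": math.log(0.05)}
lm = {"there": math.log(0.20), "their": math.log(0.70), "the": math.log(0.10)}
fused = shallow_fusion_score(asr, lm, lm_weight=1.0)
best = max(fused, key=fused.get)  # "their"
```

In a real streaming beam search this interpolation is applied at every token expansion; the LM weight is a tuning knob, which is one reason plug-and-play of different NNLMs without re-training is attractive.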
Year
2021
DOI
10.1109/ICASSP39728.2021.9414784
Venue
2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2021)
Keywords
RNN-T, language model fusion, streaming end-to-end speech recognition, leveraging unpaired text
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
7
Name                  Order  Citations/PageRank (fused in source)
Suyoun Kim            1      356.15
Yuan Shangguan        2      12.04
Jay Mahadeokar        3      94.94
Antoine Bruguier      4      63.50
Christian Fuegen      5      96.58
Michael L. Seltzer    6      102769.42
Duc-Vinh Le           7      4515.88