Title
Improving Rare Word Recognition with LM-aware MWER Training
Abstract
Language models (LMs) significantly improve the recognition accuracy of end-to-end (E2E) models on words rarely seen during training, whether used in the shallow fusion or the rescoring setup. In this work, we incorporate LMs into the discriminative training of hybrid autoregressive transducer (HAT) models to mitigate the gap between training and inference in how LMs are used. For the shallow fusion setup, we use LMs during both hypothesis generation and loss computation, and the LM-aware MWER-trained model achieves a 10% relative improvement over the model trained with standard MWER on voice search test sets containing rare words. For the rescoring setup, we learn a small neural module that generates per-token fusion weights in a data-dependent manner. This model achieves the same rescoring WER as the regular MWER-trained model, but without the need to sweep fusion weights.
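As a rough illustration of the per-token fusion idea summarized in the abstract, the following PyTorch-style sketch combines E2E and LM log-probabilities using a weight predicted from the decoder state. All names (PerTokenFusion, weight_net, decoder_state) and the architecture of the weight predictor are illustrative assumptions, not the paper's actual implementation.

import torch
import torch.nn as nn

class PerTokenFusion(nn.Module):
    # Hypothetical sketch of data-dependent per-token fusion weights;
    # module names and layer sizes are assumptions, not from the paper.
    def __init__(self, decoder_dim: int):
        super().__init__()
        # Small network mapping the decoder state to a weight in (0, 1).
        self.weight_net = nn.Sequential(
            nn.Linear(decoder_dim, 64),
            nn.Tanh(),
            nn.Linear(64, 1),
            nn.Sigmoid(),
        )

    def forward(self, decoder_state, e2e_logprobs, lm_logprobs):
        # decoder_state: (batch, seq, decoder_dim)
        # e2e_logprobs, lm_logprobs: (batch, seq, vocab)
        lam = self.weight_net(decoder_state)  # (batch, seq, 1)
        # Log-linear interpolation as in standard shallow fusion, but with
        # a learned, data-dependent weight for each output token.
        return e2e_logprobs + lam * lm_logprobs

In a rescoring setup, such a module would replace the single global fusion weight that is normally tuned by a sweep on a development set.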
Year
2022
DOI
10.21437/INTERSPEECH.2022-10660
Venue
Conference of the International Speech Communication Association (INTERSPEECH)
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
13
Name                  Order  Citations  PageRank
Weiran Wang           1      114        9.99
Tongzhou Chen         2      0          1.01
Tara N. Sainath       3      3497       232.43
Ehsan Variani         4      91         7.39
Rohit Prabhavalkar    5      163        22.56
Ronny Huang           6      0          0.34
Bhuvana Ramabhadran   7      1779       153.83
Neeraj Gaur           8      10         1.35
Sepand Mavandadi      9      0          1.35
Cal Peyser            10     0          1.69
Trevor Strohman       11     0          2.70
Yanzhang He           12     64         16.36
David Rybach          13     0          1.01