Abstract |
---|
In this work, we propose minimum Bayes risk (MBR) training of the RNN-Transducer (RNN-T) for end-to-end speech recognition. Specifically, initialized with an RNN-T trained model, MBR training is conducted by minimizing the expected edit distance between the reference label sequence and on-the-fly generated N-best hypotheses. We also introduce a heuristic to incorporate an external neural network language model (NNLM) in RNN-T beam search decoding and explore MBR training with the external NNLM. Experimental results demonstrate that an MBR trained model substantially outperforms an RNN-T trained model, and that further improvements can be achieved when training with an external NNLM. Our best MBR trained system achieves absolute character error rate (CER) reductions of 1.2% and 0.5% on read and spontaneous Mandarin speech, respectively, over a strong convolution- and transformer-based RNN-T baseline trained on ~21,000 hours of speech. |
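The MBR objective described above weights the edit distance of each N-best hypothesis by its probability renormalized over the N-best list. A minimal sketch of that computation (not the paper's implementation; the function names and the `(tokens, log_prob)` N-best representation are assumptions for illustration):

```python
import math

def edit_distance(ref, hyp):
    # Standard Levenshtein distance via dynamic programming.
    m, n = len(ref), len(hyp)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,      # deletion
                           dp[i][j - 1] + 1,      # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[m][n]

def mbr_loss(reference, nbest):
    # nbest: list of (hypothesis tokens, log-probability) pairs.
    # Renormalize hypothesis probabilities over the N-best list
    # (a stable softmax over the log-probabilities), then return
    # the expected edit distance under that distribution.
    log_probs = [lp for _, lp in nbest]
    mx = max(log_probs)
    z = sum(math.exp(lp - mx) for lp in log_probs)
    weights = [math.exp(lp - mx) / z for lp in log_probs]
    return sum(w * edit_distance(reference, hyp)
               for (hyp, _), w in zip(nbest, weights))
```

For example, with a two-hypothesis N-best list where the correct hypothesis carries 70% of the renormalized mass and a one-edit hypothesis carries 30%, the expected edit distance is 0.3. In actual MBR training the gradient of this expectation flows back into the RNN-T model through the hypothesis probabilities.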
Year | DOI | Venue |
---|---|---|
2020 | 10.21437/Interspeech.2020-1221 | INTERSPEECH |
DocType | Citations | PageRank |
---|---|---|
Conference | 1 | 0.35 |
References | Authors |
---|---|
0 | 5 |
Name | Order | Citations | PageRank |
---|---|---|---|
Chao Weng | 1 | 113 | 19.75 |
Yu Chengzhu | 2 | 1 | 0.69 |
Jia Cui | 3 | 7 | 3.54 |
Chunlei Zhang | 4 | 37 | 7.43 |
Dong Yu | 5 | 6264 | 475.73 |