Title |
---|
MULTITASK LEARNING AND JOINT OPTIMIZATION FOR TRANSFORMER-RNN-TRANSDUCER SPEECH RECOGNITION |
Abstract |
---|
Recently, several end-to-end speech recognition methods known as transformer-transducers have been introduced. In these methods, the transcription network is generally modeled by a transformer-based neural network, while the prediction network can be modeled by either a transformer or a recurrent neural network (RNN). In this paper, we propose novel multitask learning, joint optimization, and joint decoding methods for transformer-RNN-transducer systems. The main advantage of the proposed methods is that the model retains information from a large text corpus, eliminating the need for an external language model (LM). We demonstrate their effectiveness through experiments with the well-known ESPnet toolkit on the widely used LibriSpeech datasets. We also show that the proposed methods reduce the word error rate (WER) by 16.6% and 13.3% on the test-clean and test-other sets, respectively, without changing the overall model structure or relying on an external LM. |
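The abstract describes a transducer whose transcription network is a transformer and whose prediction network is an RNN, combined through a joint network. As an illustrative sketch only, not the authors' implementation, the following minimal PyTorch model wires those three pieces together; every module choice, layer count, and dimension below is an assumption for demonstration purposes.

```python
# Minimal transformer-RNN-transducer sketch. Illustrative only: all sizes,
# layer counts, and module choices are assumptions, not the paper's setup.
import torch
import torch.nn as nn

class TransformerRNNTransducer(nn.Module):
    def __init__(self, input_dim=80, vocab_size=1000, d_model=256, blank_id=0):
        super().__init__()
        # Transcription network: transformer encoder over acoustic features.
        self.input_proj = nn.Linear(input_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           batch_first=True)
        self.transcription = nn.TransformerEncoder(layer, num_layers=6)
        # Prediction network: RNN (here an LSTM) over previously emitted labels.
        self.embed = nn.Embedding(vocab_size, d_model, padding_idx=blank_id)
        self.prediction = nn.LSTM(d_model, d_model, batch_first=True)
        # Joint network: scores every (time step, label step) pair.
        self.joint = nn.Sequential(nn.Linear(2 * d_model, d_model), nn.Tanh(),
                                   nn.Linear(d_model, vocab_size))

    def forward(self, feats, labels):
        # feats: (B, T, input_dim) acoustic features; labels: (B, U) tokens.
        enc = self.transcription(self.input_proj(feats))   # (B, T, D)
        pred, _ = self.prediction(self.embed(labels))      # (B, U, D)
        # Broadcast both streams onto a (B, T, U, 2D) lattice and score it.
        t = enc.unsqueeze(2).expand(-1, -1, pred.size(1), -1)
        u = pred.unsqueeze(1).expand(-1, enc.size(1), -1, -1)
        return self.joint(torch.cat([t, u], dim=-1))       # (B, T, U, V)

# Example: score a batch of 2 utterances against 12-token label prefixes.
logits = TransformerRNNTransducer()(torch.randn(2, 50, 80),
                                    torch.randint(1, 1000, (2, 12)))
```

In training, such a lattice would typically be scored with a transducer loss (e.g., torchaudio.functional.rnnt_loss), and the multitask learning and joint optimization the abstract proposes would add further objectives on top of it; those specifics are in the paper, not in this sketch.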
Year | DOI | Venue |
---|---|---|
2021 | 10.1109/ICASSP39728.2021.9414911 | 2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021) |
Keywords | DocType | Citations
---|---|---|
speech recognition, transducer, transformer, joint optimization, multitask learning, language model, connectionist temporal classification, joint decoding | Conference | 0
PageRank | References | Authors
---|---|---|
0.34 | 0 | 2
Name | Order | Citations | PageRank |
---|---|---|---|
Jae-Jin Jeon | 1 | 0 | 0.68 |
Eesung Kim | 2 | 1 | 1.73 |