Title
Multitask Learning With Low-Level Auxiliary Tasks For Encoder-Decoder Based Speech Recognition
Abstract
End-to-end training of deep learning-based models allows for implicit learning of intermediate representations based on the final task loss. However, the end-to-end approach ignores the useful domain knowledge encoded in explicit intermediate-level supervision. We hypothesize that using intermediate representations as auxiliary supervision at lower levels of deep networks may be a good way of combining the advantages of end-to-end training and more traditional pipeline approaches. We present experiments on conversational speech recognition where we use lower-level tasks, such as phoneme recognition, in a multitask training approach with an encoder-decoder model for direct character transcription. We compare multiple types of lower-level tasks and analyze the effects of the auxiliary tasks. Our results on the Switchboard corpus show that this approach improves recognition accuracy over a standard encoder-decoder model on the Eval2000 test set.
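The approach the abstract describes pairs the main character-level objective with an auxiliary loss computed at a lower layer of the encoder. The sketch below (PyTorch) shows one plausible realization, assuming a stacked bidirectional-LSTM encoder, a CTC phoneme loss attached to the lower layers, and a per-frame character loss standing in for the paper's attention decoder; all layer sizes, vocabulary sizes, and the weight lambda_aux are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of multitask training with a low-level auxiliary task:
# phoneme-level CTC supervision on the lower encoder layers, plus a
# character-level loss on top. Sizes and names are assumptions.
import torch
import torch.nn as nn

class MultitaskEncoder(nn.Module):
    def __init__(self, n_mel=40, hidden=256, n_phones=46, n_chars=30):
        super().__init__()
        # Lower encoder layers: receive the auxiliary phoneme supervision.
        self.lower = nn.LSTM(n_mel, hidden, num_layers=2,
                             batch_first=True, bidirectional=True)
        # Upper encoder layers: feed the character-level output.
        self.upper = nn.LSTM(2 * hidden, hidden, num_layers=2,
                             batch_first=True, bidirectional=True)
        self.phone_head = nn.Linear(2 * hidden, n_phones)  # CTC over phonemes
        self.char_head = nn.Linear(2 * hidden, n_chars)    # decoder stand-in

    def forward(self, feats):
        low, _ = self.lower(feats)           # (B, T, 2*hidden)
        high, _ = self.upper(low)            # (B, T, 2*hidden)
        phone_logits = self.phone_head(low)  # auxiliary task, lower layer
        char_logits = self.char_head(high)   # main task, top layer
        return phone_logits, char_logits

# Hypothetical batch: 8 utterances, 200 frames of 40-dim features.
model = MultitaskEncoder()
feats = torch.randn(8, 200, 40)
phone_logits, char_logits = model(feats)

# Auxiliary CTC loss on phoneme targets (lengths are made up here).
ctc = nn.CTCLoss(blank=0)
phone_targets = torch.randint(1, 46, (8, 50))
aux_loss = ctc(phone_logits.log_softmax(-1).transpose(0, 1),
               phone_targets,
               input_lengths=torch.full((8,), 200, dtype=torch.long),
               target_lengths=torch.full((8,), 50, dtype=torch.long))

# Main character-level loss; the paper uses an attention decoder, so this
# per-frame cross-entropy is only a placeholder for that term.
char_targets = torch.randint(0, 30, (8, 200))
main_loss = nn.functional.cross_entropy(char_logits.reshape(-1, 30),
                                        char_targets.reshape(-1))

# Multitask objective: weighted sum of the two losses.
lambda_aux = 0.3  # assumed interpolation weight
loss = main_loss + lambda_aux * aux_loss
loss.backward()
```

Attaching the auxiliary head below the top layer is the key design choice: the lower layers get direct phoneme-level supervision while the upper layers remain free to specialize for character transcription.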
Year
2017
DOI
10.21437/Interspeech.2017-1118
Venue
18TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2017), VOLS 1-6: SITUATED INTERACTION
Keywords
speech recognition, multitask learning, encoder-decoder, CTC, LSTM
DocType
Conference
Volume
abs/1704.01631
ISSN
2308-457X
Citations
10
PageRank
0.54
References
16
Authors
4
Name               Order  Citations  PageRank
Shubham Toshniwal  1      19         4.12
Hao Tang           2      43         5.30
Liang Lu           3      894        165.81
Karen Livescu      4      1254       71.43