Title
A Comparison of Techniques for Language Model Integration in Encoder-Decoder Speech Recognition.
Abstract
Attention-based recurrent neural encoder-decoder models present an elegant solution to the automatic speech recognition problem. This approach folds the acoustic model, pronunciation model, and language model into a single network and requires only a parallel corpus of speech and text for training. However, unlike in conventional approaches that combine separate acoustic and language models, it is not clear how to use additional (unpaired) text. While there has been previous work on methods addressing this problem, a thorough comparison among methods is still lacking. In this paper, we compare a suite of past methods and some of our own proposed methods for using unpaired text data to improve encoder-decoder models. For evaluation, we use the medium-sized Switchboard data set and the large-scale Google voice search and dictation data sets. Our results confirm the benefits of using unpaired text across a range of methods and data sets. Surprisingly, for first-pass decoding, the rather simple approach of shallow fusion performs best across data sets. However, for Google data sets we find that cold fusion has a lower oracle error rate and outperforms other approaches after second-pass rescoring on the Google voice search data set.
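The abstract reports that shallow fusion is the strongest method for first-pass decoding. As background, the following is a minimal sketch of shallow fusion scoring, in which the encoder-decoder's token log-probabilities are log-linearly interpolated with an external language model's log-probabilities when scoring a hypothesis. The callables and the lm_weight value are hypothetical placeholders for illustration, not the paper's implementation.

```python
from typing import Callable, List

def shallow_fusion_score(
    tokens: List[int],
    decoder_log_prob: Callable[[List[int], int], float],  # log p_dec(y_t | y_<t, x); assumed interface
    lm_log_prob: Callable[[List[int], int], float],        # log p_LM(y_t | y_<t); assumed interface
    lm_weight: float = 0.3,                                # interpolation weight (illustrative value)
) -> float:
    """Score a hypothesis by adding the external LM log-probability,
    scaled by lm_weight, to the encoder-decoder log-probability at each step."""
    score = 0.0
    for t, tok in enumerate(tokens):
        prefix = tokens[:t]
        score += decoder_log_prob(prefix, tok) + lm_weight * lm_log_prob(prefix, tok)
    return score
```

In practice this score would be applied to partial hypotheses inside beam search rather than to complete hypotheses after the fact, but the interpolation itself is the same.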
Year
2018
DOI
10.1109/slt.2018.8639038
Venue
2018 IEEE Spoken Language Technology Workshop (SLT)
Keywords
Decoding, Computational modeling, Training, Mathematical model, Data models, Speech recognition, Acoustics
DocType
Conference
Volume
abs/1807.10857
ISSN
2639-5479
Citations
4
PageRank
0.40
References
11
Authors
6
Name                Order  Citations  PageRank
Shubham Toshniwal   1      19         4.12
Anjuli Kannan       2      90         7.17
Chung-Cheng Chiu    3      248        28.00
Yonghui Wu          4      1065       72.78
Tara N. Sainath     5      3497       232.43
Karen Livescu       6      1254       71.43