Title
Transformer-based Acoustic Modeling for Hybrid Speech Recognition
Abstract
We propose and evaluate transformer-based acoustic models (AMs) for hybrid speech recognition. Several modeling choices are discussed in this work, including various positional embedding methods and an iterated loss to enable training deep transformers. We also present a preliminary study of using limited right context in transformer models, which makes them suitable for streaming applications. We demonstrate that on the widely used Librispeech benchmark, our transformer-based AM outperforms the best published hybrid result by 19% to 26% relative when the standard n-gram language model (LM) is used. Combined with a neural network LM for rescoring, our proposed approach achieves state-of-the-art results on Librispeech. Our findings are also confirmed on a much larger internal dataset.
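The sketch below is a minimal PyTorch illustration of the kind of hybrid transformer acoustic model the abstract describes: a stack of transformer encoder layers that maps acoustic feature frames to per-frame senone posteriors for a hybrid decoder. It is not the authors' implementation; the class names (TransformerAM, SinusoidalPositionalEncoding), feature dimension, layer count, and senone count are assumptions for this example, and it uses fixed sinusoidal positional embeddings as just one of the positional embedding options the paper compares. The iterated loss and limited-right-context variants mentioned in the abstract are omitted.

# Illustrative sketch only (not the paper's code): transformer acoustic model
# producing frame-level senone logits for a hybrid ASR decoder.
import math
import torch
import torch.nn as nn

class SinusoidalPositionalEncoding(nn.Module):
    """Adds fixed sinusoidal positional embeddings to the input frames."""
    def __init__(self, d_model: int, max_len: int = 10000):
        super().__init__()
        position = torch.arange(max_len).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
        pe = torch.zeros(max_len, d_model)
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        self.register_buffer("pe", pe)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, d_model); add the first `time` positional vectors.
        return x + self.pe[: x.size(1)]

class TransformerAM(nn.Module):
    """Stack of transformer encoder layers over acoustic features (hypothetical sizes)."""
    def __init__(self, feat_dim=80, d_model=512, nhead=8, num_layers=12, num_senones=8000):
        super().__init__()
        self.input_proj = nn.Linear(feat_dim, d_model)
        self.pos_enc = SinusoidalPositionalEncoding(d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                           dim_feedforward=2048, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.output = nn.Linear(d_model, num_senones)  # per-frame senone logits

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, time, feat_dim) -> (batch, time, num_senones)
        x = self.pos_enc(self.input_proj(feats))
        return self.output(self.encoder(x))

# Example: 80-dim log-mel features for a 2-second utterance at 100 frames/sec.
model = TransformerAM()
logits = model(torch.randn(1, 200, 80))
print(logits.shape)  # torch.Size([1, 200, 8000])

In a hybrid setup such as the one the abstract describes, these frame-level posteriors would be combined with an n-gram LM (and optionally rescored with a neural LM) in a conventional WFST decoder.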
Year
2020
DOI
10.1109/ICASSP40776.2020.9054345
Venue
ICASSP
DocType
Conference
Citations
2
PageRank
0.39
References
0
Authors
13
Name                   | Order | Citations | PageRank
Yongqiang Wang         | 1     | 175       | 13.32
Abdel-rahman Mohamed   | 2     | 3772      | 266.13
Duc-Vinh Le            | 3     | 45        | 15.88
Chunxi Liu             | 4     | 8         | 4.20
Xiao Alex              | 5     | 3         | 2.44
Jay Mahadeokar         | 6     | 9         | 4.94
Huang Hongzhao         | 7     | 0         | 0.39
Andros Tjandra         | 8     | 16        | 9.15
Xiaohui Zhang          | 9     | 194       | 19.81
Frank Zhang            | 10    | 10        | 6.00
Christian Fuegen       | 11    | 9         | 6.58
Geoffrey Zweig         | 12    | 3406      | 320.25
Michael L. Seltzer     | 13    | 1027      | 69.42