Title
Transformers with convolutional context for ASR.
Abstract
The recent success of transformer networks for neural machine translation and other NLP tasks has led to a surge of research work applying them to speech recognition. Recent efforts have studied key questions such as how to combine positional embeddings with speech features, and how to stabilize optimization when training transformer networks at scale. In this paper, we propose replacing the sinusoidal positional embedding of transformers with convolutionally learned input representations. These contextual representations provide subsequent transformer blocks with the relative positional information needed to discover long-range relationships between local concepts. The proposed system has favorable optimization characteristics: our reported results are produced with a fixed learning rate of 1.0 and no warmup steps. The proposed model reduces the word error rate (WER) by 12% and 16% relative to previously published work on the Librispeech dev-other and test-other subsets, respectively, when no extra LM text is provided. Full code to reproduce our results will be available online at the time of publication.
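The abstract describes a convolutional front-end whose learned local context stands in for sinusoidal positional embeddings. Below is a minimal PyTorch sketch of that idea; the module name ConvContextEncoder and the kernel sizes, strides, and layer counts are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch: a convolutional front-end over log-mel features that feeds a
# transformer encoder with NO explicit positional embedding. All
# hyperparameters here are assumptions for illustration.
import torch
import torch.nn as nn

class ConvContextEncoder(nn.Module):
    def __init__(self, n_mels=80, d_model=512, n_layers=6, n_heads=8):
        super().__init__()
        # 2-D convolutions over (time, frequency) learn local context,
        # giving the transformer blocks relative positional information.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
        )
        # Frequency dimension after two stride-2 convolutions.
        conv_out_dim = 32 * ((n_mels + 3) // 4)
        self.proj = nn.Linear(conv_out_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, n_layers)

    def forward(self, feats):
        # feats: (batch, time, n_mels)
        x = self.conv(feats.unsqueeze(1))      # (batch, 32, time/4, n_mels/4)
        b, c, t, f = x.shape
        x = x.permute(0, 2, 1, 3).reshape(b, t, c * f)
        x = self.proj(x)                       # (batch, time/4, d_model)
        return self.transformer(x)             # no positional embedding added

# Usage: encode a batch of two 3-second log-mel spectrograms (100 frames/s).
feats = torch.randn(2, 300, 80)
out = ConvContextEncoder()(feats)
print(out.shape)  # torch.Size([2, 75, 512])
```

Because the convolutions bake relative position into the features themselves, nothing position-dependent is added before the transformer stack, matching the abstract's claim that no sinusoidal embedding is needed.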
Year
2019
Venue
arXiv: Computation and Language
DocType
Journal
Citations
1
PageRank
0.35
References
0
Authors
3
Name | Order | Citations | PageRank
Abdel-rahman Mohamed | 1 | 3772 | 266.13
Dmytro Okhonko | 2 | 1 | 0.69
Luke S. Zettlemoyer | 3 | 3348 | 163.34