Title
Distributed Training of Deep Neural Network Acoustic Models for Automatic Speech Recognition: A comparison of current training strategies
Abstract
The past decade has witnessed great progress in automatic speech recognition (ASR) driven by advances in deep learning. These performance gains can be attributed both to improved models and to large-scale training data. The key to training such models is the use of efficient distributed learning techniques. In this article, we provide an overview of distributed training techniques for deep neural network (DNN) acoustic models used for ASR. Starting with the fundamentals of data-parallel stochastic gradient descent (SGD) and ASR acoustic modeling, we investigate various distributed training strategies and their realizations in high-performance computing (HPC) environments, with an emphasis on striking a balance between communication and computation. Experiments are carried out on a popular public benchmark to study the convergence, speedup, and recognition performance of the investigated strategies.
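The core technique the abstract names, synchronous data-parallel SGD, has each worker compute gradients on its own shard of the data, after which the gradients are averaged across workers before every optimizer step. The following is a minimal sketch of that idea using PyTorch's torch.distributed package; the toy feed-forward model, random data, and hyperparameters are illustrative assumptions, not the authors' implementation.

# Minimal sketch of synchronous data-parallel SGD with gradient averaging.
# The model, data, and hyperparameters are hypothetical stand-ins for an
# acoustic model; only the allreduce-based synchronization pattern is the point.
import torch
import torch.distributed as dist
import torch.nn as nn

def main():
    # One process per worker; rank and world size are supplied by the
    # launcher (e.g., torchrun). The "gloo" backend keeps this CPU-runnable.
    dist.init_process_group(backend="gloo")
    rank, world = dist.get_rank(), dist.get_world_size()

    # Toy classifier standing in for a DNN acoustic model.
    model = nn.Sequential(nn.Linear(40, 128), nn.ReLU(), nn.Linear(128, 10))
    opt = torch.optim.SGD(model.parameters(), lr=0.01)

    for step in range(100):
        # Each rank draws its own local mini-batch (random data here).
        x = torch.randn(32, 40)
        y = torch.randint(0, 10, (32,))
        loss = nn.functional.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()
        # Communication phase: sum each gradient tensor over all ranks,
        # then divide by the number of workers to obtain the mean gradient.
        for p in model.parameters():
            dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
            p.grad /= world
        opt.step()
        if rank == 0 and step % 20 == 0:
            print(f"step {step}: loss {loss.item():.4f}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()

Launched with, for example, torchrun --nproc_per_node=4 train.py, each process trains on its own mini-batches while the all_reduce call is the communication step whose cost, balanced against computation, is the focus of the strategies the article compares.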
Year
2020
DOI
10.1109/MSP.2020.2969859
Venue
IEEE Signal Processing Magazine
DocType
Journal
Volume
37
Issue
3
ISSN
1053-5888
Citations
2
PageRank
0.38
References
0
Authors
6
Name              Order  Citations  PageRank
Xiaodong Cui      1      410        40.82
Wei Zhang         2      345        19.04
Ulrich Finkler    3      65         8.62
George Saon       4      825        80.99
Michael Picheny   5      14619      20.15
David S. Kung     6      166        20.93