Title
Distributed Deep Learning Strategies For Automatic Speech Recognition
Abstract
In this paper, we propose and investigate a variety of distributed deep learning strategies for automatic speech recognition (ASR) and evaluate them with a state-of-the-art long short-term memory (LSTM) acoustic model on the 2000-hour Switchboard corpus (SWB2000), one of the most widely used datasets for benchmarking ASR performance. We first investigate which hyper-parameters (e.g., the learning rate) enable training with a sufficiently large batch size without impairing model accuracy. We then implement several distributed strategies, including synchronous SGD (SYNC), Asynchronous Decentralized Parallel SGD (ADPSGD), and a hybrid of the two (HYBRID), to study their runtime/accuracy trade-offs. We show that ADPSGD can train the LSTM model in 14 hours on 16 NVIDIA P100 GPUs to reach a 7.6% WER on the Hub5-2000 Switchboard (SWB) test set and a 13.1% WER on the CallHome (CH) test set. Furthermore, HYBRID can train the model in 11.5 hours on 32 NVIDIA V100 GPUs without loss in accuracy.
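For readers unfamiliar with the ADPSGD idea mentioned in the abstract, below is a minimal toy sketch of decentralized parameter averaging: each worker takes a local gradient step and then averages its model with a single ring neighbor, rather than synchronizing all workers through a barrier or a central parameter server. This is not the paper's implementation; the function name, the scalar quadratic objective, the round-robin mixing schedule, and all constants are illustrative assumptions.

```python
import random

def adpsgd_simulation(n_workers=4, steps=200, lr=0.1, seed=0):
    """Toy simulation of Asynchronous Decentralized Parallel SGD (ADPSGD).

    Each worker holds its own copy of a scalar parameter w and minimizes
    f(w) = 0.5 * (w - target)**2 using noisy gradients. After each local
    step, one worker averages its parameter with its ring neighbor only,
    so there is no global synchronization barrier and no central server.
    """
    rnd = random.Random(seed)
    target = 3.0  # minimizer of the toy objective
    # independently initialized local models, one per worker
    w = [rnd.gauss(0.0, 1.0) for _ in range(n_workers)]
    for t in range(steps):
        # local SGD step on every worker with a noisy gradient (w - target)
        w = [wi - lr * ((wi - target) + 0.01 * rnd.gauss(0.0, 1.0))
             for wi in w]
        # decentralized mixing: worker i averages with its ring neighbor
        i = t % n_workers
        j = (i + 1) % n_workers
        w[i] = w[j] = 0.5 * (w[i] + w[j])
    return w, target
```

Despite only pairwise averaging, the local models drift toward consensus near the optimum, which is the intuition behind trading the SYNC barrier for neighbor-wise communication.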
Year
2019
DOI
10.1109/icassp.2019.8682888
Venue
2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP)
Keywords
automatic speech recognition, LSTM, deep learning, parallel computing, switchboard
Field
Asynchronous communication, Task analysis, Computer science, CUDA, Speech recognition, Artificial intelligence, Deep learning, sync, Hidden Markov model, Test set, Acoustic model
DocType
Journal
Volume
abs/1904.04956
ISSN
1520-6149
Citations
0
PageRank
0.34
References
0
Authors
7
Name             Order  Citations  PageRank
Wei Zhang        1      345        19.04
Xiaodong Cui     2      410        40.82
Ulrich Finkler   3      65         8.62
B. Kingsbury     4      41753      35.43
George Saon      5      825        80.99
David S. Kung    6      166        20.93
Michael Picheny  7      14619      20.15