Title
On the Acceleration of Deep Learning Model Parallelism With Staleness
Abstract
Training deep convolutional neural networks for computer vision is slow and inefficient, especially when the model is large and distributed across multiple devices. The inefficiency stems from the backpropagation algorithm's forward locking, backward locking, and update locking problems. Existing acceleration methods either address only one of these locking problems or incur severe accuracy loss or memory inefficiency. Moreover, none of them considers the straggler problem among devices. In this paper, we propose Layer-wise Staleness and a novel, efficient training algorithm, Diversely Stale Parameters (DSP), to address these challenges. We also analyze the convergence of DSP with two popular gradient-based methods and prove that both are guaranteed to converge to critical points for non-convex problems. Finally, extensive experiments on training deep learning models demonstrate that DSP achieves significant training speedup with stronger robustness than compared methods.
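The abstract summarizes DSP at a high level without spelling out its mechanics. As a rough intuition for what training with layer-wise staleness can look like, the toy NumPy sketch below trains a two-stage model in which the earlier stage's backward pass uses a parameter version from a previous iteration, so the two stages need not wait on fully synchronized weights. The two-stage split, the staleness buffer depth, and the learning rate are illustrative assumptions for this sketch, not the paper's actual DSP schedule or implementation.

```python
# Toy illustration (not the authors' code) of layer-wise staleness:
# the backward pass of stage 1 uses a stale copy of the stage-2 weights,
# mimicking gradients computed with delayed parameter versions in a
# decoupled model-parallel pipeline.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(4, 8))   # stage-1 weights
W2 = rng.normal(scale=0.1, size=(8, 1))   # stage-2 weights
stale_W2 = [W2.copy()]                    # buffer of past stage-2 versions
STALENESS, LR = 2, 0.05                   # hypothetical staleness depth and step size

X = rng.normal(size=(32, 4))
y = X @ rng.normal(size=(4, 1))           # toy regression target

for step in range(200):
    # Forward pass with the current parameters.
    h = np.maximum(X @ W1, 0.0)           # stage 1 (ReLU)
    pred = h @ W2                          # stage 2
    err = pred - y                         # dL/dpred for 0.5 * MSE

    # Stage 2 uses its current weights; stage 1's backward pass uses the
    # oldest buffered copy of W2, i.e. a stale parameter version.
    W2_for_backward = stale_W2[0]
    dW2 = h.T @ err / len(X)
    dh = (err @ W2_for_backward.T) * (h > 0)
    dW1 = X.T @ dh / len(X)

    W1 -= LR * dW1
    W2 -= LR * dW2

    # Maintain the bounded staleness buffer.
    stale_W2.append(W2.copy())
    if len(stale_W2) > STALENESS:
        stale_W2.pop(0)

print("final loss:", 0.5 * float(np.mean(err ** 2)))
```

Even with the delayed W2 copy, this toy run still converges on the quadratic objective, which matches the flavor of the paper's claim that bounded staleness preserves convergence to critical points; the actual DSP analysis and scheduling are given in the paper itself.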
Year
2020
DOI
10.1109/CVPR42600.2020.00216
Venue
2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DocType
Conference
ISSN
1063-6919
Citations
1
PageRank
0.39
References
21
Authors
3
Name | Order | Citations | PageRank
An Xu | 1 | 2 | 2.77
Zhouyuan Huo | 2 | 81 | 12.16
Heng Huang | 3 | 3080 | 203.21