Title
Breaking (Global) Barriers in Parallel Stochastic Optimization With Wait-Avoiding Group Averaging
Abstract
Deep learning at scale is dominated by communication time. Distributing samples across nodes usually yields the best performance, but poses scaling challenges due to global information dissemination and load imbalance across uneven sample lengths. State-of-the-art decentralized optimizers mitigate the problem, but require more iterations to achieve the same accuracy as their globally-communicating counterparts. We present Wait-Avoiding Group Model Averaging (WAGMA) SGD, a wait-avoiding stochastic optimizer that reduces global communication via subgroup weight exchange. The key insight is a combination of algorithmic changes to the averaging scheme and the use of a group allreduce operation. We prove the convergence of WAGMA-SGD, and empirically show that it retains convergence rates similar to Allreduce-SGD. For evaluation, we train ResNet-50 on ImageNet; Transformer for machine translation; and deep reinforcement learning for navigation at scale. Compared with state-of-the-art decentralized SGD variants, WAGMA-SGD significantly improves training throughput (e.g., 2.1× on 1,024 GPUs for reinforcement learning), and achieves the fastest time-to-solution (e.g., the highest score using the shortest training time for Transformer).
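The abstract describes averaging model weights only within subgroups of workers via a group allreduce instead of a global one. As a rough illustration (not the paper's WAGMA-SGD implementation), the sketch below shows subgroup model averaging using PyTorch's torch.distributed API; the fixed disjoint group partitioning and the group size are illustrative assumptions.

# Minimal sketch of subgroup model averaging (illustrative only, not the
# authors' implementation). Assumes PyTorch's torch.distributed API with an
# initialized process group; fixed disjoint subgroups are an assumption.
import torch
import torch.distributed as dist

def build_groups(world_size: int, group_size: int):
    """Partition ranks into disjoint process groups; every rank must call this."""
    return [dist.new_group(ranks=list(range(s, min(s + group_size, world_size))))
            for s in range(0, world_size, group_size)]

def group_average(model: torch.nn.Module, group, group_size: int):
    """Average model weights across one subgroup with a group allreduce."""
    with torch.no_grad():
        for param in model.parameters():
            dist.all_reduce(param.data, op=dist.ReduceOp.SUM, group=group)
            param.data.div_(group_size)

# Usage sketch: after a local SGD step, each rank averages within its own subgroup.
# rank, world_size = dist.get_rank(), dist.get_world_size()
# groups = build_groups(world_size, group_size=4)
# group_average(model, groups[rank // 4], group_size=4)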
Year
2021
DOI
10.1109/TPDS.2020.3040606
Venue
IEEE Transactions on Parallel and Distributed Systems
Keywords
Stochastic gradient descent, distributed deep learning, decentralized optimization
DocType
Journal
Volume
32
Issue
7
ISSN
1045-9219
Citations
1
PageRank
0.36
References
0
Authors
7
Name                   Order  Citations  PageRank
Shigang Li             1      7          3.19
Tal Ben-Nun            2      116        14.21
Giorgi Nadiradze       3      1          1.03
Salvatore Di Girolamo  4      30         6.00
Nikoli Dryden          5      2          1.04
Dan Alistarh           6      341        42.64
Torsten Hoefler        7      3          1.07