Title
Exploiting Simultaneous Communications to Accelerate Data Parallel Distributed Deep Learning
Abstract
Synchronous stochastic gradient descent (S-SGD) with data parallelism is widely used for training deep learning (DL) models in distributed systems. A pipelined schedule of the computing and communication tasks of a DL training job is an effective scheme to hide part of the communication cost. In such pipelined S-SGD, tensor fusion (i.e., merging several consecutive layers' gradients into a single communication) is a key ingredient for improving communication efficiency. However, existing tensor fusion techniques schedule the communication tasks sequentially, overlooking the fact that these tasks are mutually independent. In this paper, we expand the design space of scheduling by exploiting simultaneous All-Reduce communications. Through theoretical analysis and experiments, we show that simultaneous All-Reduce communications can effectively improve the communication efficiency of small tensors. We formulate an optimization problem of minimizing the training iteration time, in which both tensor fusion and simultaneous communications are allowed. We develop an efficient optimal scheduling solution and implement the distributed training algorithm ASC-WFBP with Horovod and PyTorch. We conduct real-world experiments on an 8-node GPU cluster with 32 GPUs connected by 10Gbps Ethernet. Experimental results on four modern DNNs show that ASC-WFBP achieves a speedup of about 1.09x-2.48x over the baseline without tensor fusion, and 1.15x-1.35x over the state-of-the-art tensor fusion solution.
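The core idea summarized above, issuing several All-Reduce operations concurrently instead of strictly one after another so that small gradient tensors use the network more efficiently, can be illustrated with a minimal PyTorch sketch. This is not the paper's ASC-WFBP implementation; the backend choice, tensor sizes, and launch method below are illustrative assumptions.
```python
# Minimal conceptual sketch (not the authors' ASC-WFBP code): several
# All-Reduce operations on small gradient tensors are launched concurrently
# with torch.distributed's async_op=True, instead of waiting for each one
# in turn. Backend ("gloo" here, "nccl" on GPU clusters), tensor sizes, and
# the torchrun launch are illustrative assumptions.
import torch
import torch.distributed as dist

def sequential_allreduce(tensors):
    # Baseline: one blocking All-Reduce per tensor, strictly one after another.
    for t in tensors:
        dist.all_reduce(t, op=dist.ReduceOp.SUM)

def simultaneous_allreduce(tensors):
    # Launch all All-Reduce operations without blocking, then wait once at the end.
    handles = [dist.all_reduce(t, op=dist.ReduceOp.SUM, async_op=True) for t in tensors]
    for h in handles:
        h.wait()

if __name__ == "__main__":
    # Typically launched with `torchrun --nproc_per_node=N this_script.py`.
    dist.init_process_group(backend="gloo")
    # A few small tensors standing in for per-layer gradients.
    grads = [torch.ones(1024) * (dist.get_rank() + 1) for _ in range(4)]
    simultaneous_allreduce(grads)
    dist.destroy_process_group()
```
In the full ASC-WFBP algorithm, the choice of which tensors to fuse and which to communicate simultaneously is decided by the scheduling optimization described in the abstract, rather than issued unconditionally as in this sketch.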
Year
2021
DOI
10.1109/INFOCOM42981.2021.9488803
Venue
IEEE Conference on Computer Communications (IEEE INFOCOM 2021)
Keywords
Distributed Deep Learning, Communication-Efficient, Simultaneous Communications
DocType
Conference
ISSN
0743-166X
Citations
0
PageRank
0.34
References
0
Authors
3
Name | Order | Citations | PageRank
Shaohuai Shi | 1 | 6 | 1.93
Xiaowen Chu | 2 | 1273 | 101.81
Baochun Li | 3 | 9416 | 614.20