Title
MG-WFBP: Efficient Data Communication for Distributed Synchronous SGD Algorithms.
Abstract
Distributed synchronous stochastic gradient descent (SGD) has been widely used to train deep neural networks on computer clusters. As computational power grows, network communication has become a limiting factor of system scalability. In this paper, we observe that many deep neural networks consist of a large number of layers, each with only a small amount of data to be communicated. Based on the fact that merging several short communication tasks into a single one can reduce the overall communication time, we formulate an optimization problem to minimize the training iteration time. We develop an optimal solution named merged-gradient wait-free backpropagation (MG-WFBP) and implement it in our open-source deep learning platform B-Caffe. Experimental results on an 8-node GPU cluster with a 10GbE interconnect, together with trace-based simulation results on a 64-node cluster, show that MG-WFBP achieves much better scaling efficiency than the existing WFBP and SyncEASGD methods.
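The abstract's key observation, that merging several small layer-wise gradient messages into one larger message reduces overall communication time, follows from the standard latency-plus-bandwidth communication cost model. Below is a minimal Python sketch of that reasoning only; the ALPHA/BETA parameters and layer sizes are illustrative assumptions, not values or code from the paper:

```python
# Cost model: sending m bytes costs ALPHA (per-message startup latency) + BETA * m.
# Merging k small messages into one saves (k - 1) * ALPHA, which is why coalescing
# many tiny layer gradients can shorten the total communication time per iteration.

ALPHA = 50e-6   # assumed per-message startup latency in seconds (illustrative)
BETA = 1e-9     # assumed per-byte transfer time in seconds (~1 GB/s, illustrative)

def separate_comm_time(message_sizes):
    """Total time if each layer's gradient is sent as its own message."""
    return sum(ALPHA + BETA * m for m in message_sizes)

def merged_comm_time(message_sizes):
    """Total time if all gradients are merged into a single message."""
    return ALPHA + BETA * sum(message_sizes)

# Hypothetical per-layer gradient sizes in bytes: many small layers, a few large ones.
layer_grads = [4_096] * 50 + [1_000_000] * 3

print(f"separate: {separate_comm_time(layer_grads) * 1e3:.3f} ms")
print(f"merged:   {merged_comm_time(layer_grads) * 1e3:.3f} ms")
```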
Year: 2018
DOI: 10.1109/infocom.2019.8737367
Venue: IEEE INFOCOM 2019 - IEEE Conference on Computer Communications
Keywords: Training, Data communication, Backpropagation, Computational modeling, Data models, Neural networks, Hardware
Field: Stochastic gradient descent, GPU cluster, Computer science, Limiting factor, Algorithm, Artificial intelligence, Deep learning, Interconnection, Optimization problem, Computer cluster, Scalability, Distributed computing
DocType:
Journal:
Volume: abs/1811.11141
ISSN: 0743-166X
Citations: 7
PageRank: 0.45
References: 0
Authors: 2
Name         Order  Citations  PageRank
Bo Li        1      578        45.93
Xiaowen Chu  2      1273       101.81