Title
Efficient and Robust Parallel DNN Training through Model Parallelism on Multi-GPU Platform.
Abstract
Training a Deep Neural Network (DNN) is compute-intensive, often taking days to weeks to complete. Parallel execution of DNN training on GPUs is therefore a widely adopted approach to speed up the process. Owing to its implementation simplicity, data parallelism is currently the most commonly used parallelization method; however, it suffers from excessive inter-GPU communication overhead due to frequent weight synchronization among GPUs. An alternative is model parallelism, which partitions the model among GPUs. This approach significantly reduces inter-GPU communication cost compared to data parallelism, but maintaining load balance across GPUs is a challenge. Moreover, model parallelism faces the staleness issue; that is, gradients are computed with stale weights. In this paper, we propose a novel model parallelism method that achieves load balance by concurrently executing the forward and backward passes of two batches, and resolves the staleness issue with weight prediction. Experimental results show that our proposal achieves up to 15.77x speedup over data parallelism and up to 2.18x speedup over the state-of-the-art model parallelism method without incurring accuracy loss.
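The abstract does not spell out the weight-prediction rule. One natural way to realize it with momentum SGD is to extrapolate each weight along its smoothed-gradient (momentum) direction by the number of pending updates. The PyTorch-style sketch below is illustrative only, assuming momentum SGD; the helper predict_weights, the toy model, and the step count are hypothetical and not the authors' implementation.

```python
import torch
from torch import nn, optim

# Toy model and momentum-SGD optimizer (illustrative values, not from the paper).
model = nn.Linear(4, 2)
lr = 0.1
opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)

def predict_weights(params, optimizer, steps_ahead, lr):
    """Extrapolate each parameter `steps_ahead` updates into the future,
    using the optimizer's momentum buffer as a smoothed-gradient estimate.
    The live parameters are left untouched; predicted copies are returned."""
    predicted = []
    for p in params:
        buf = optimizer.state.get(p, {}).get('momentum_buffer')
        if buf is None:
            # No update history yet; fall back to the current weights.
            predicted.append(p.detach().clone())
        else:
            # Momentum SGD applies p <- p - lr * buf each step, so s future
            # steps are approximated by subtracting s * lr * buf.
            predicted.append(p.detach() - steps_ahead * lr * buf)
    return predicted

# One warm-up step so the momentum buffers exist, then predict 3 steps ahead.
x, y = torch.randn(8, 4), torch.randn(8, 2)
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
opt.step()
future_weights = predict_weights(list(model.parameters()), opt, steps_ahead=3, lr=lr)
```

In a pipelined model-parallel setting, such predicted weights would be used for the forward pass of a batch whose gradients are applied only several updates later, so that the forward and backward computations see approximately consistent weights.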
Year
2018
Venue
arXiv: Distributed, Parallel, and Cluster Computing
Field
Synchronization, Load balancing (computing), Computer science, Parallel computing, Data parallelism, Artificial neural network, Speedup, Distributed computing
DocType
Journal
Volume
abs/1809.02839
Citations
1
PageRank
0.35
References
24
Authors
3
Name               Order   Citations   PageRank
Chi-Chung Chen     1       10          1.94
Chia-Lin Yang      2       1033        76.39
Hsiang-Yun Cheng   3       61          6.07