Title
Toward Model Parallelism for Deep Neural Network Based on Gradient-Free ADMM Framework
Abstract
Alternating Direction Method of Multipliers (ADMM) has recently been proposed as a potential alternative optimizer to Stochastic Gradient Descent (SGD) for deep learning problems, because ADMM can mitigate gradient vanishing and poor conditioning and has shown good scalability in many large-scale deep learning applications. However, a parallel ADMM computational framework for deep neural networks is still lacking because of layer dependency among variables. In this paper, we propose a novel parallel deep learning ADMM framework (pdADMM) to achieve layer parallelism: parameters in each layer of the neural network can be updated independently in parallel. The convergence of the proposed pdADMM to a critical point is theoretically proven under mild conditions, and its convergence rate is proven to be o(1/k), where k is the number of iterations. Extensive experiments on six benchmark datasets demonstrate that the proposed pdADMM yields more than a 10-fold speedup for training large-scale deep neural networks and outperforms most of the comparison methods. Our code is available at: https://github.com/xianggebenben/pdADMM.
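The abstract describes the core idea of layer parallelism: once the network is decoupled layer-wise with auxiliary and dual variables, every layer's subproblem can be solved independently in the same iteration. The sketch below is a minimal, hypothetical illustration of that structure in Python (multiprocessing over per-layer states), not the authors' pdADMM algorithm; the per-layer update rule, the penalty parameter RHO, and the helper names update_layer and parallel_iteration are placeholders chosen for illustration only.

```python
# Hypothetical sketch (not the authors' code): one outer iteration of a
# layer-parallel, ADMM-style scheme. Each layer l carries parameters W_l and
# auxiliary input/output variables (p_l, q_l) linked by the consensus
# constraint p_{l+1} = q_l. Given last iteration's dual variables u_l, each
# layer's subproblem needs no other layer's current iterate, so all layers
# can be updated in parallel.
import numpy as np
from multiprocessing import Pool

RHO = 1.0  # ADMM penalty parameter (assumed value, for illustration)

def update_layer(args):
    """Solve one layer's (placeholder) subproblem independently."""
    W, p, q, u = args
    # Placeholder local objective: pull q toward a linear map of p while
    # accounting for the scaled constraint residual carried by u.
    W_new = W - 0.1 * (W @ p - q) @ p.T                  # simple descent step on W
    q_new = (W_new @ p + RHO * (q - u)) / (1.0 + RHO)    # averaged update of q
    return W_new, p, q_new, u

def parallel_iteration(layers):
    """Update all layers in parallel, then refresh the dual variables."""
    with Pool(processes=len(layers)) as pool:
        layers = pool.map(update_layer, layers)          # layer-parallel step
    # The dual update couples neighbouring layers but is cheap and local:
    # u_l accumulates the residual of the constraint p_{l+1} = q_l.
    new_layers = []
    for l, (W, p, q, u) in enumerate(layers):
        if l + 1 < len(layers):
            p_next = layers[l + 1][1]
            u = u + (p_next - q)                         # scaled dual ascent
        new_layers.append((W, p, q, u))
    return new_layers

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dims = [8, 16, 16, 4]                                # toy layer widths
    layers = [(rng.standard_normal((dims[i + 1], dims[i])),   # W_l
               rng.standard_normal((dims[i], 1)),             # p_l (layer input)
               rng.standard_normal((dims[i + 1], 1)),         # q_l (layer output)
               np.zeros((dims[i + 1], 1)))                    # u_l (dual variable)
              for i in range(len(dims) - 1)]
    for _ in range(3):
        layers = parallel_iteration(layers)
    print("finished 3 layer-parallel iterations")
```

The point of the sketch is structural: the expensive per-layer work sits entirely inside update_layer and touches only that layer's own state, which is what makes the pool.map step embarrassingly parallel across layers; only the inexpensive dual update reads a neighbour's variables.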
Year
2020
DOI
10.1109/ICDM50108.2020.00068
Venue
2020 IEEE International Conference on Data Mining (ICDM)
Keywords
Model Parallelism, Deep Neural Network, Alternating Direction Method of Multipliers, Convergence
DocType
Conference
ISSN
1550-4786
ISBN
978-1-7281-8317-6
Citations
0
PageRank
0.34
References
37
Authors
4
Name | Order | Citations | PageRank
Junxiang Wang | 1 | 35 | 6.68
Chai Zheng | 2 | 12 | 1.39
Yue Cheng | 3 | 75 | 9.77
Liang Zhao | 4 | 3865 | 4.50