Title
Asynchronous Stochastic Gradient Descent with Delay Compensation for Distributed Deep Learning.
Abstract
With the rapid development of deep learning, people have started to train very large neural networks on massive amounts of data. Asynchronous Stochastic Gradient Descent (ASGD) is widely used for this task; however, it is known to suffer from the problem of delayed gradients. That is, by the time a local worker adds the gradient it has computed to the global model, the global model may already have been updated by other workers, so the gradient becomes delayed. We propose a novel technique to compensate for this delay, so as to make the optimization behavior of ASGD closer to that of sequential SGD. This is done by leveraging a Taylor expansion of the gradient function and efficient approximators of the Hessian matrix of the loss function. We call the corresponding new algorithm Delay Compensated ASGD (DC-ASGD). We evaluated the proposed algorithm on the CIFAR-10 and ImageNet datasets, and the experimental results demonstrate that DC-ASGD outperforms both synchronous SGD and ASGD, and nearly approaches the performance of sequential SGD.
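To make the compensation idea in the abstract concrete, the snippet below is a minimal, illustrative sketch of a delay-compensated update on the parameter-server side. It assumes a first-order Taylor correction of the delayed gradient with an element-wise (diagonal) outer-product approximation of the Hessian; the function name dc_asgd_update, the variable names, and the parameter values are illustrative assumptions, not taken from the paper.

import numpy as np

def dc_asgd_update(w_global, w_backup, delayed_grad, lr=0.1, lam=0.04):
    """One delay-compensated update step on the parameter server (sketch).

    w_global     : current global parameters (already updated by other workers)
    w_backup     : snapshot of the parameters the worker used to compute its gradient
    delayed_grad : the (possibly stale) gradient sent back by that worker
    lam          : strength of the delay-compensation term (assumed hyperparameter)

    The delayed gradient is corrected with a first-order Taylor term in which the
    Hessian is approximated element-wise by the outer product of the gradient with
    itself (its diagonal), so the correction stays as cheap as the gradient itself.
    """
    compensated = delayed_grad + lam * delayed_grad * delayed_grad * (w_global - w_backup)
    return w_global - lr * compensated

# Illustrative usage: the server keeps a backup copy of the parameters it handed
# to each worker and applies the corrected update when the stale gradient arrives.
rng = np.random.default_rng(0)
w = rng.normal(size=5)                      # current global model
w_sent = w - 0.05 * rng.normal(size=5)      # stand-in for the earlier snapshot sent to a worker
g_stale = rng.normal(size=5)                # gradient computed at that snapshot
w = dc_asgd_update(w, w_sent, g_stale)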
Year: 2016
Venue: arXiv: Learning
Field: Asynchronous communication, Stochastic gradient descent, Computer science, Artificial intelligence, Deep learning
DocType: Journal
Volume: abs/1609.08326
Citations: 4
PageRank: 0.39
References: 11
Authors: 7
Name            Order   Citations   PageRank
Shuxin Zheng    1       4           3.10
Qi Meng         2       4           2.42
Taifeng Wang    3       179         13.33
Wei Chen        4       23          1.84
Nenghai Yu      5       2238        183.33
Zhi-Ming Ma     6       227         18.26
Tie-yan Liu     7       4662        256.32