Title
Distributed Stochastic Gradient Descent Using LDGM Codes
Abstract
We consider a distributed learning problem in which computation is carried out on a system consisting of a master node and multiple worker nodes. In such systems, slow-running machines called stragglers can cause a significant decrease in performance. Recently, a coding-theoretic framework for mitigating stragglers in distributed learning, named Gradient Coding (GC), was established by Tandon et al. Most studies on GC aim at recovering the gradient information completely, assuming that the Gradient Descent (GD) algorithm is used as the learning algorithm. On the other hand, if the Stochastic Gradient Descent (SGD) algorithm is used, it is not necessary to recover the gradient information completely; an unbiased estimator of the gradient is sufficient for learning. In this paper, we propose a distributed SGD scheme using Low-Density Generator Matrix (LDGM) codes. In the proposed system, recovering the gradient information completely may take longer than with existing GC methods; however, the master node can obtain a high-quality unbiased estimator of the gradient at low computational cost, which leads to an overall performance improvement.
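The abstract's key observation is that SGD does not require the full gradient, only an unbiased estimator of it. The following is a minimal illustrative sketch of that idea (not the paper's LDGM construction): each worker's partial gradient arrives independently with some probability, and inverse-probability weighting of the received results yields an unbiased estimate of the full gradient. All names and parameters here are hypothetical.

```python
import numpy as np

# Sketch of unbiased gradient estimation under stragglers (illustrative only;
# the paper's actual scheme encodes partial gradients with LDGM codes).
rng = np.random.default_rng(0)
n, d, p = 10, 4, 0.6                     # workers, gradient dimension, arrival probability
partial_grads = rng.normal(size=(n, d))  # gradient piece held by each worker
full_grad = partial_grads.sum(axis=0)    # what full-recovery GC would reconstruct

def unbiased_estimate(grads, p, rng):
    """Estimate the full gradient from the workers that happened to respond."""
    arrived = rng.random(len(grads)) < p     # non-straggler workers this round
    # Each piece is received with probability p, so dividing by p makes the
    # expectation of the estimate equal to the true sum of partial gradients.
    return grads[arrived].sum(axis=0) / p

# Averaging many independent estimates approaches the true gradient,
# demonstrating unbiasedness despite never waiting for all workers.
est = np.mean(
    [unbiased_estimate(partial_grads, p, rng) for _ in range(20000)],
    axis=0,
)
print(np.max(np.abs(est - full_grad)))  # small deviation from the true gradient
```

Each single-round estimate is noisy (its variance grows as the arrival probability shrinks), which is exactly the trade-off the abstract describes: a cheap unbiased estimate each round instead of waiting to recover the gradient exactly.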
Year: 2019
DOI: 10.1109/ISIT.2019.8849580
Venue: 2019 IEEE INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY (ISIT)
Field: Stochastic gradient descent, Generator matrix, Gradient descent, Mathematical optimization, Distributed learning, Bias of an estimator, Coding (social sciences), Mathematics, Performance improvement, Computation
DocType: Journal
Volume: abs/1901.04668
Citations: 0
PageRank: 0.34
References: 2
Authors: 4
Name                    Order  Citations  PageRank
Shunsuke Horii          1      0          3.72
Takahiro Yoshida        2      0          0.34
Manabu Kobayashi        3      11         7.44
Toshiyasu Matsushima    4      97         32.76