Title
Stochastic Gradient Coding for Straggler Mitigation in Distributed Learning
Abstract
We consider distributed gradient descent in the presence of stragglers. Recent work on gradient coding and approximate gradient coding has shown how to add redundancy in distributed gradient descent to guarantee convergence even if some workers are stragglers, that is, slow or non-responsive. In this work we propose an approximate gradient coding scheme called Stochastic Gradient Coding (SGC), which works when the stragglers are random. SGC distributes data points redundantly to workers according to a pair-wise balanced design, and then simply ignores the stragglers. We prove that the convergence rate of SGC mirrors that of batched Stochastic Gradient Descent (SGD) for the ℓ2 loss function, and show how the convergence rate can improve with the redundancy. We also provide bounds for more general convex loss functions. We show empirically that SGC requires only a small amount of redundancy to handle a large number of stragglers and that it can outperform existing approximate gradient codes when the number of stragglers is large.
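The abstract describes the scheme concretely enough for a toy illustration: replicate each data point to a few workers, let each surviving worker return its partial gradient, and simply ignore stragglers. Below is a minimal Python sketch of that idea for the ℓ2 loss. It is not the authors' implementation: the function `sgc_step`, the uniform-random replication (a simple stand-in for the paper's pair-wise balanced design), and all parameter values are illustrative assumptions.

```python
import numpy as np

def sgc_step(X, y, w, n_workers=10, redundancy=2, p_straggle=0.3,
             lr=0.1, rng=None):
    """One gradient step of the SGC idea (illustrative sketch only).

    Each data point is replicated to `redundancy` distinct workers chosen
    uniformly at random (a stand-in for the pair-wise balanced design in
    the paper). Each worker independently straggles with probability
    `p_straggle`; stragglers are simply ignored, and the surviving
    contributions are rescaled to keep the gradient estimate unbiased.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(y)
    # Replicate each data point onto `redundancy` distinct workers.
    assignment = [rng.choice(n_workers, size=redundancy, replace=False)
                  for _ in range(n)]
    # Each worker straggles (is dropped) independently this round.
    alive = rng.random(n_workers) >= p_straggle
    grad = np.zeros_like(w)
    for i in range(n):
        n_alive = alive[assignment[i]].sum()  # surviving copies of point i
        if n_alive == 0:
            continue  # every copy straggled; SGC just drops the point
        # Gradient of the l2 loss (x_i.w - y_i)^2 / 2 for point i, rescaled
        # by the expected number of surviving copies so that the estimate
        # matches the full gradient in expectation.
        g_i = (X[i] @ w - y[i]) * X[i]
        grad += g_i * n_alive / (redundancy * (1 - p_straggle))
    return w - lr * grad / n

# Usage sketch: recover w_true even though ~30% of workers straggle per step.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = X @ w_true
w = np.zeros(5)
for _ in range(300):
    w = sgc_step(X, y, w, rng=rng)
print(np.linalg.norm(w - w_true))  # should be small despite the stragglers
```

The design choice mirrors the abstract: no decoding step and no coordination with stragglers, only redundant placement plus rescaling of whatever arrives.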
Year
2019
DOI
10.1109/JSAIT.2020.2991361
Venue
IEEE Journal on Selected Areas in Information Theory
Keywords
Distributed computing, straggler mitigation, stochastic gradient descent, machine learning algorithms, convergence analysis
DocType
Journal
Volume
1
Issue
1
Citations
2
PageRank
0.44
References
0
Authors
3
Name               Order  Citations  PageRank
Rawad Bitar        1      14         1.97
Mary Wootters      2      172        25.99
Salim El Rouayheb  3      13         3.43