Title
Near-Optimal Straggler Mitigation For Distributed Gradient Methods
Abstract
Modern learning algorithms use gradient descent updates to train inferential models that best explain data. Scaling these approaches to massive data sizes requires distributed gradient descent schemes in which worker nodes compute partial gradients based on their local data sets and send the results to a master node, where the computations are aggregated into a full gradient and the learning model is updated. A major performance bottleneck, however, is that some of the worker nodes may run slowly. These nodes, known as stragglers, can significantly slow down the computation, since the slowest node may dictate the overall computation time. We propose a distributed computing scheme, called Batched Coupon's Collector (BCC), to alleviate the effect of stragglers in gradient methods. We prove that our BCC scheme is robust to a near-optimal number of random stragglers. We also empirically demonstrate that the proposed BCC scheme reduces the run-time by up to 85.4% on Amazon EC2 clusters compared with other straggler mitigation strategies. Finally, we generalize the proposed BCC scheme to minimize the completion time when implementing gradient descent-based algorithms over heterogeneous worker nodes.
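The abstract describes the baseline master-worker pattern that BCC improves on: each worker computes a partial gradient on its local shard and the master aggregates them into a full gradient before updating the model. Below is a minimal sketch of that baseline only (not the paper's BCC scheme), simulated in a single process with a thread pool; the least-squares objective, shard assignment, step size, and worker delay distribution are illustrative assumptions. Because the master blocks until every partial gradient arrives, the slowest worker dictates each iteration's time, which is exactly the straggler effect the paper targets.

```python
# Minimal sketch (illustrative, NOT the paper's BCC scheme):
# synchronous distributed gradient descent for least squares,
# simulated with a thread pool. Delays, shards, and step size are assumptions.
import time
import random
import numpy as np
from concurrent.futures import ThreadPoolExecutor

rng = np.random.default_rng(0)
n_workers, n, d = 4, 400, 10
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true
shards = np.array_split(np.arange(n), n_workers)  # each worker's local data

def partial_gradient(k, w):
    """Worker k: gradient of 0.5*||X_k w - y_k||^2 on its local shard."""
    # Simulated compute time; worker 0 occasionally straggles.
    time.sleep(random.uniform(0.0, 0.2 if k == 0 else 0.02))
    idx = shards[k]
    return X[idx].T @ (X[idx] @ w - y[idx])

w, lr = np.zeros(d), 1.0 / n
with ThreadPoolExecutor(max_workers=n_workers) as pool:
    for _ in range(30):
        # Master waits for ALL partial gradients, so the slowest
        # (straggling) worker sets the duration of the iteration.
        grads = pool.map(partial_gradient, range(n_workers), [w] * n_workers)
        w -= lr * sum(grads)

print("final parameter error:", np.linalg.norm(w - w_true))
```

Straggler mitigation schemes such as the one proposed in the paper change the aggregation step so the master does not have to wait for every worker; the sketch above only illustrates why waiting for all of them is the bottleneck.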
Year
2018
DOI
10.1109/IPDPSW.2018.00137
Venue
2018 IEEE INTERNATIONAL PARALLEL AND DISTRIBUTED PROCESSING SYMPOSIUM WORKSHOPS (IPDPSW 2018)
DocType
Conference
Volume
abs/1710.09990
ISSN
2164-7062
Citations
11
PageRank
0.59
References
8
Authors
4
Name                              Order  Citations  PageRank
Songze Li                         1      134        16.22
Seyed Mohammadreza Mousavi Kalan  2      14         1.99
Amir Salman Avestimehr            3      1880       157.39
Mahdi Soltanolkotabi              4      409        25.97