Title: Toward Understanding the Impact of Staleness in Distributed Machine Learning
Abstract: Many distributed machine learning (ML) systems adopt non-synchronous execution to alleviate the network communication bottleneck, which results in stale parameters that do not reflect the latest updates. Despite much development in large-scale ML, the effects of staleness on learning remain inconclusive, as it is challenging to directly monitor or control staleness in complex distributed environments. In this work, we study the convergence behavior of a wide array of ML models and algorithms under delayed updates. Our extensive experiments reveal the rich diversity of the effects of staleness on the convergence of ML algorithms and offer insights into seemingly contradictory reports in the literature. The empirical findings also inspire a new convergence analysis of stochastic gradient descent under staleness in non-convex optimization, matching the best-known convergence rate of O(1/√T).
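The delayed-update setting the abstract describes can be made concrete with a small simulation. The sketch below is not from the paper; it is a minimal, assumed illustration in which the gradient applied at step t is computed on a parameter copy that is tau steps stale (tau = 0 recovers synchronous SGD). The function name stale_sgd, the toy quadratic objective, and all parameter names are hypothetical.

```python
import numpy as np

def stale_sgd(grad_fn, w0, lr=0.1, steps=200, tau=0):
    """Gradient descent where each update applies a gradient computed on
    parameters that are `tau` steps old, mimicking non-synchronous execution.
    (Illustrative sketch only, not the paper's algorithm.)"""
    history = [np.asarray(w0, dtype=float)]
    for t in range(steps):
        # A worker reads a stale parameter copy: tau steps behind, clamped at w0.
        stale_w = history[max(0, t - tau)]
        history.append(history[-1] - lr * grad_fn(stale_w))
    return history[-1]

# Toy objective f(w) = 0.5 * ||w||^2, so grad f(w) = w.
for tau in (0, 5, 20):
    w_T = stale_sgd(lambda w: w, w0=[5.0, -3.0], tau=tau)
    print(f"tau={tau:2d}  ||w_T|| = {np.linalg.norm(w_T):.2e}")
```

On this toy quadratic, tau = 0 converges fastest, moderate staleness (tau = 5) converges more slowly, and large staleness (tau = 20) destabilizes the iterates at this learning rate, echoing the diversity of staleness effects the paper studies.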
Year: 2018
Venue: ICLR
Field: Computer science, Artificial intelligence, Machine learning
DocType:
Volume: abs/1810.03264
Citations: 1
Journal:
PageRank: 0.35
References: 0
Authors: 5
Name          Order  Citations  PageRank
Wei Dai       1      333        12.77
Yi Zhou       2      651        7.55
Nanqing Dong  3      26         3.53
Hao Zhang     4      3037       115.96
Bo Xing       5      7332       471.43