Title
STL-SGD: Speeding Up Local SGD with Stagewise Communication Period
Abstract
Distributed parallel stochastic gradient descent algorithms are workhorses for large-scale machine learning tasks. Among them, local stochastic gradient descent (Local SGD) has attracted significant attention due to its low communication complexity. Previous studies prove that the communication complexity of Local SGD with a fixed or an adaptive communication period is on the order of O(N^{3/2} T^{1/2}) and O(N^{3/4} T^{3/4}) when the data distributions on clients are identical (IID) or otherwise (Non-IID), where N is the number of clients and T is the number of iterations. In this paper, to accelerate convergence by reducing the communication complexity, we propose STagewise Local SGD (STL-SGD), which increases the communication period gradually along with decreasing the learning rate. We prove that STL-SGD retains the same convergence rate and linear speedup as mini-batch SGD. In addition, as a benefit of the increasing communication period, when the objective is strongly convex or satisfies the Polyak-Łojasiewicz condition, the communication complexity of STL-SGD is O(N log T) for the IID case and O(N^{1/2} T^{1/2}) for the Non-IID case, achieving significant improvements over Local SGD. Experiments on both convex and non-convex problems demonstrate the superior performance of STL-SGD.
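To make the stagewise idea in the abstract concrete, the following is a minimal NumPy sketch (not the authors' implementation): clients run Local SGD with periodic model averaging, and at each stage boundary the learning rate is decreased while the communication period is increased. The toy quadratic objectives, the halve-the-learning-rate/double-the-period schedule, and the fixed stage length are illustrative assumptions only; the paper's actual stage lengths and schedules come from its theorems and are not reproduced here.

# Hypothetical illustration of a stagewise Local SGD schedule.
import numpy as np

rng = np.random.default_rng(0)

N = 8                  # number of clients
d = 10                 # model dimension
STAGES = 4             # number of stages
STEPS_PER_STAGE = 200  # local iterations per stage (assumed, for illustration)

# Toy Non-IID data: client i holds f_i(x) = 0.5 * ||x - b_i||^2,
# so the global optimum is the mean of the b_i.
b = rng.normal(size=(N, d))

def local_grad(i, x):
    """Noisy stochastic gradient of client i's loss at x."""
    return (x - b[i]) + 0.1 * rng.normal(size=d)

x_global = np.zeros(d)
lr, period = 0.5, 1    # initial learning rate and communication period

for stage in range(STAGES):
    x_local = np.tile(x_global, (N, 1))      # broadcast the averaged model
    for t in range(STEPS_PER_STAGE):
        for i in range(N):                   # each client takes a local SGD step
            x_local[i] -= lr * local_grad(i, x_local[i])
        if (t + 1) % period == 0:            # communicate: average local models
            x_local[:] = x_local.mean(axis=0)
    x_global = x_local.mean(axis=0)
    lr *= 0.5                                # decrease the learning rate ...
    period *= 2                              # ... while increasing the period
    print(f"stage {stage}: next lr={lr:.3f}, next period={period}, "
          f"dist to opt={np.linalg.norm(x_global - b.mean(axis=0)):.4f}")

Because the period grows only as the learning rate shrinks, later stages communicate far less often while the averaged model keeps approaching the optimum, which is the intuition behind the reduced communication complexity claimed above.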
Year
2021
Venue
THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE
DocType
Conference
Volume
35
ISSN
2159-5399
Citations
0
PageRank
0.34
References
0
Authors
4
Name           Order  Citations  PageRank
Shen Shuheng   1      0          0.34
Cheng Yifei    2      0          0.34
Jingchang Liu  3      2          2.06
Linli Xu       4      790        42.51