Title
Convergence Analysis of Distributed Stochastic Gradient Descent with Shuffling
Abstract
When using stochastic gradient descent (SGD) to solve large-scale machine learning problems, especially deep learning problems, a common data processing practice is to shuffle the training data, partition the data across multiple threads/machines if needed, and then perform several epochs of training on the re-shuffled (either locally or globally) data. This procedure means the instances used to compute the gradients are no longer independently sampled from the training data set, which contradicts the basic assumptions of the conventional convergence analysis of SGD. Does distributed SGD still have desirable convergence properties in this practical situation? In this paper, we answer this question. First, we give a mathematical formulation of the practical data processing procedure in distributed machine learning, which we call (data partition with) global/local shuffling. We observe that global shuffling is equivalent to without-replacement sampling if the shuffling operations are independent. Second, we prove that SGD with either global or local shuffling has convergence guarantees for non-convex tasks such as deep learning. The convergence rate for local shuffling is slower than that for global shuffling, since information is lost when there is no communication between the partitioned data. We also consider the situation in which the permutation after shuffling is not uniformly distributed (we call this insufficient shuffling), and discuss the conditions under which this insufficiency does not affect the convergence rate. Finally, we give the convergence analysis for the convex case. An interesting finding is that non-convex tasks such as deep learning are more suitable for shuffling than convex tasks. Our theoretical results provide important insights for large-scale machine learning, especially for selecting data processing methods that achieve faster convergence and good speedup. Our theoretical findings are verified by extensive experiments on logistic regression and deep neural networks.
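To make the two data processing schemes in the abstract concrete, here is a minimal sketch (not code from the paper; it assumes NumPy, a synthetic least-squares objective, and simple per-epoch model averaging as the synchronization step) contrasting global shuffling, where the full dataset is re-permuted every epoch before being partitioned across workers, with local shuffling, where the data are partitioned once and each worker re-permutes only its own partition.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic least-squares data: n examples, d features, M workers.
n, d, M = 1024, 10, 4
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

def sgd_epoch(w, indices, lr=0.01):
    # One pass of plain SGD over the given example indices.
    for i in indices:
        grad = (X[i] @ w - y[i]) * X[i]  # gradient of 0.5 * (x_i^T w - y_i)^2
        w = w - lr * grad
    return w

def global_shuffling_epoch(w):
    # Global shuffling: re-permute the whole dataset each epoch,
    # then split the fresh permutation across the M workers.
    parts = np.array_split(rng.permutation(n), M)
    # Each worker runs SGD on its partition; models are averaged once
    # per epoch (one simple synchronization choice for illustration).
    return np.mean([sgd_epoch(w.copy(), part) for part in parts], axis=0)

# Local shuffling: partition once, up front; no data is ever exchanged.
fixed_parts = np.array_split(rng.permutation(n), M)

def local_shuffling_epoch(w):
    # Each worker re-permutes only its own fixed partition.
    return np.mean([sgd_epoch(w.copy(), rng.permutation(part))
                    for part in fixed_parts], axis=0)

w_glob = np.zeros(d)
w_loc = np.zeros(d)
for _ in range(20):
    w_glob = global_shuffling_epoch(w_glob)
    w_loc = local_shuffling_epoch(w_loc)

print("global shuffling, distance to w_true:", np.linalg.norm(w_glob - w_true))
print("local shuffling,  distance to w_true:", np.linalg.norm(w_loc - w_true))

The per-epoch model averaging is just one possible synchronization choice for this illustration; the relevant difference between the two schemes is where the shuffling happens relative to the partitioning.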
Year
2019
DOI
10.1016/j.neucom.2019.01.037
Venue
Neurocomputing
Keywords
Deep learning, Stochastic gradient descent, Distributed computing, Non-convex optimization, Shuffling
Field
Gradient method, Convergence (routing), Mathematical optimization, Gradient descent, Stochastic gradient descent, Shuffling, Rate of convergence, Artificial intelligence, Deep learning, Mathematics, Speedup
DocType
Journal
Volume
337
ISSN
0925-2312
Citations
8
PageRank
0.70
References
7
Authors
5
Name          Order  Citations  PageRank
Meng Qi       1      30         8.47
Wei Chen      2      166        14.55
Yue Wang      3      10         1.39
Zhi-Ming Ma   4      227        18.26
Tie-yan Liu   5      4662       256.32