Title
Distributed Stochastic Optimization via Adaptive Stochastic Gradient Descent.
Abstract
Stochastic convex optimization algorithms are the most popular way to train machine learning models on large-scale data. Scaling up the training process of these models is crucial in many applications, but the most popular algorithm, Stochastic Gradient Descent (SGD), is a serial algorithm that is surprisingly hard to parallelize. In this paper, we propose an efficient distributed stochastic optimization method based on adaptive step sizes and variance reduction techniques. We achieve a linear speedup in the number of machines, small memory footprint, and only a small number of synchronization rounds -- logarithmic in dataset size -- in which the computation nodes communicate with each other. Critically, our approach is a general reduction that parallelizes any serial SGD algorithm, allowing us to leverage the significant progress that has been made in designing adaptive SGD algorithms. We conclude by implementing our algorithm in the Spark distributed framework and exhibiting dramatic performance gains on large-scale logistic regression problems.
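The reduction described in the abstract runs a serial adaptive SGD method independently on each machine and synchronizes only a handful of times. As a rough illustration only, not the authors' algorithm (whose adaptive step sizes, variance reduction, and synchronization rule are more involved), the following Python sketch shards a synthetic logistic regression problem across simulated machines, runs an AdaGrad-style serial SGD on each shard, and averages the iterates at each of a few synchronization rounds; every function name and constant here is hypothetical.

import numpy as np

def adagrad_sgd(w, X, y, lr=0.5, eps=1e-8, rng=None):
    # One pass of serial SGD with AdaGrad-style per-coordinate step sizes
    # over a local data shard (logistic loss, labels in {-1, +1}).
    if rng is None:
        rng = np.random.default_rng(0)
    g_sq = np.zeros_like(w)                      # accumulated squared gradients
    for i in rng.permutation(len(y)):
        margin = np.clip(y[i] * X[i].dot(w), -30.0, 30.0)
        grad = -y[i] * X[i] / (1.0 + np.exp(margin))   # logistic loss gradient
        g_sq += grad ** 2
        w = w - lr * grad / (np.sqrt(g_sq) + eps)      # adaptive step size
    return w

def distributed_sgd(X, y, num_machines=4, sync_rounds=5, seed=0):
    # Shard the data across simulated machines, run the serial adaptive SGD
    # locally, and average the iterates at each synchronization round.
    rng = np.random.default_rng(seed)
    shards = np.array_split(rng.permutation(len(y)), num_machines)
    w = np.zeros(X.shape[1])
    for _ in range(sync_rounds):
        local = [adagrad_sgd(w.copy(), X[idx], y[idx], rng=rng) for idx in shards]
        w = np.mean(local, axis=0)               # the only communication per round
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, d = 2000, 20
    X = rng.normal(size=(n, d))
    w_true = rng.normal(size=d)
    y = np.sign(X @ w_true + 0.1 * rng.normal(size=n))
    w_hat = distributed_sgd(X, y)
    print("training accuracy:", np.mean(np.sign(X @ w_hat) == y))

In this sketch the number of synchronization rounds is a small constant, reflecting the paper's claim that communication rounds can be kept logarithmic in the dataset size rather than growing with the number of SGD steps.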
Year: 2018
Venue: arXiv: Machine Learning
Field: Stochastic optimization, Stochastic gradient descent, Spark (distributed computing), Computer science, Algorithm, Memory footprint, Variance reduction, Convex optimization, Speedup, Computation
DocType:
Volume: abs/1802.05811
Citations: 0
Journal:
PageRank: 0.34
References: 12
Authors: 2
Name                 Order  Citations  PageRank
Ashok Cutkosky       1      0          1.01
Róbert Busa-Fekete   2      23         4.48