Title
Large-Scale Stochastic Learning using GPUs
Abstract
In this work we propose an accelerated stochastic learning system for very large-scale applications. Acceleration is achieved by mapping the training algorithm onto massively parallel processors: we demonstrate a parallel, asynchronous GPU implementation of the widely used stochastic coordinate descent/ascent algorithm that can provide up to a 35× speed-up over a sequential CPU implementation. In order to train on very large datasets that do not fit inside the memory of a single GPU, we then consider techniques for distributed stochastic learning. We propose a novel method for optimally aggregating model updates from worker nodes when the training data is distributed either by example or by feature. Using this technique, we demonstrate that one can scale out stochastic learning across up to 8 worker nodes without any significant increase in training time. Finally, we combine GPU acceleration with the optimized distributed method to train on a dataset consisting of 200 million training examples and 75 million features. We show that, by scaling out across 4 GPUs, one can attain a high degree of training accuracy in around 4 seconds: a 20× speed-up in training time compared to a multi-threaded, distributed implementation across 4 CPUs.
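To give a concrete picture of the kind of algorithm the abstract refers to, the sketch below shows a minimal, Hogwild-style asynchronous stochastic coordinate descent kernel in CUDA for a toy ridge-regression problem. It is an illustration only, not the authors' GPU implementation or the distributed aggregation scheme described above: the kernel name `async_scd_kernel`, the damping parameter `step`, and the synthetic data are all invented for this example.

```cuda
// Illustrative sketch only (not the implementation from the paper):
// a Hogwild-style asynchronous stochastic coordinate descent (SCD) kernel for
// ridge regression,  min_x 0.5*||A x - b||^2 + 0.5*lambda*||x||^2,
// with A stored column-major so that each coordinate owns one column.
#include <cstdio>
#include <cuda_runtime.h>
#include <curand_kernel.h>

__global__ void async_scd_kernel(const float *A, const float *colNormSq,
                                 float *x, float *r,     // r = A x - b (shared)
                                 int n, int d, float lambda, float step,
                                 int updatesPerThread, unsigned long long seed) {
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    curandState rng;
    curand_init(seed, tid, 0, &rng);

    for (int it = 0; it < updatesPerThread; ++it) {
        int j = curand(&rng) % d;                 // pick a random coordinate
        const float *Aj = A + (size_t)j * n;      // column j of A

        // Coordinate-wise gradient computed from the possibly stale residual.
        float g = lambda * x[j];
        for (int i = 0; i < n; ++i) g += Aj[i] * r[i];

        // Damped exact coordinate step; 'step' < 1 compensates for the
        // conflicting, lock-free updates issued by concurrent threads.
        float delta = -step * g / (colNormSq[j] + lambda);

        // Asynchronous (lock-free) updates of the shared model and residual.
        atomicAdd(&x[j], delta);
        for (int i = 0; i < n; ++i) atomicAdd(&r[i], Aj[i] * delta);
    }
}

int main() {
    const int n = 4, d = 3;
    const float lambda = 0.1f;
    // Tiny synthetic problem, A in column-major layout.
    float hA[n * d] = {1, 0, 0, 1,  0, 1, 0, 1,  0, 0, 1, 1};
    float hb[n] = {1, 2, 3, 6};
    float hNorm[d], hr[n], hx[d] = {0, 0, 0};
    for (int j = 0; j < d; ++j) {
        hNorm[j] = 0.0f;
        for (int i = 0; i < n; ++i) hNorm[j] += hA[j * n + i] * hA[j * n + i];
    }
    for (int i = 0; i < n; ++i) hr[i] = -hb[i];   // residual for x = 0

    float *dA, *dNorm, *dx, *dr;
    cudaMalloc(&dA, sizeof(hA));       cudaMemcpy(dA, hA, sizeof(hA), cudaMemcpyHostToDevice);
    cudaMalloc(&dNorm, sizeof(hNorm)); cudaMemcpy(dNorm, hNorm, sizeof(hNorm), cudaMemcpyHostToDevice);
    cudaMalloc(&dx, sizeof(hx));       cudaMemcpy(dx, hx, sizeof(hx), cudaMemcpyHostToDevice);
    cudaMalloc(&dr, sizeof(hr));       cudaMemcpy(dr, hr, sizeof(hr), cudaMemcpyHostToDevice);

    // A handful of threads performing lock-free updates in parallel.
    async_scd_kernel<<<1, 4>>>(dA, dNorm, dx, dr, n, d, lambda, 0.5f, 500, 1234ULL);
    cudaDeviceSynchronize();

    cudaMemcpy(hx, dx, sizeof(hx), cudaMemcpyDeviceToHost);
    printf("x = [%f, %f, %f]\n", hx[0], hx[1], hx[2]);

    cudaFree(dA); cudaFree(dNorm); cudaFree(dx); cudaFree(dr);
    return 0;
}
```

The lock-free atomicAdd updates to the shared model and residual are what make the scheme asynchronous; a full large-scale implementation would additionally exploit data sparsity and the GPU's warp-level parallelism. The sketch should compile with nvcc as a single .cu file.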
Year
2017
DOI
10.1109/IPDPSW.2017.140
Venue
2017 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)
Keywords
Machine Learning, GPU, Distributed Computing, Asynchronous Learning
DocType
Conference
Volume
abs/1702.07005
ISSN
2164-7062
ISBN
978-1-5386-3409-7
Citations
1
PageRank
0.35
References
13
Authors
5
Name                Order   Citations   PageRank
Thomas P. Parnell   1       7           2.09
Celestine Dunner    2       12          6.99
Kubilay Atasu       3       416         26.73
Manolis Sifalakis   4       2           1.37
Haris Pozidis       5       2           1.04