Title
Communication Efficient Distributed Machine Learning with the Parameter Server.
Abstract
This paper describes a third-generation parameter server framework for distributed machine learning. This framework offers two relaxations to balance system performance and algorithm efficiency. We propose a new algorithm that takes advantage of this framework to solve non-convex non-smooth problems with convergence guarantees. We present an in-depth analysis of two large-scale machine learning problems ranging from ℓ1-regularized logistic regression on CPUs to reconstruction ICA on GPUs, using 636 TB of real data with hundreds of billions of samples and dimensions. We demonstrate using these examples that the parameter server framework is an effective and straightforward way to scale machine learning to larger problems and systems than have been previously achieved.
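The abstract's two ideas, workers exchanging parameters with a server under relaxed (possibly stale) consistency, and a proximal-style update for a non-smooth penalty such as the ℓ1 term in ℓ1-regularized logistic regression, can be illustrated with a minimal, hypothetical sketch. This is not the paper's actual C++ system or API; all names (ToyParameterServer, push, pull, tau) are illustrative assumptions. It only shows the pattern: a worker pulls weights that may be a few iterations old (a bounded-delay relaxation) and the server applies a gradient step followed by soft-thresholding, the proximal operator of the ℓ1 penalty.

```python
# Minimal, hypothetical sketch of a parameter-server-style update loop.
# Single-process simulation; names and interfaces are illustrative assumptions,
# not the paper's actual implementation.
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||x||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def logistic_grad(w, X, y):
    """Gradient of the average logistic loss, labels y in {-1, +1}."""
    margins = y * (X @ w)
    probs = 1.0 / (1.0 + np.exp(margins))        # sigmoid(-margin)
    return -(X.T @ (y * probs)) / len(y)

class ToyParameterServer:
    """Holds global weights; workers pull (possibly stale) copies and push gradients."""
    def __init__(self, dim, lr=0.2, l1=0.01):
        self.w, self.lr, self.l1 = np.zeros(dim), lr, l1

    def pull(self):
        return self.w.copy()

    def push(self, grad):
        # Proximal gradient step: descend on the smooth logistic loss,
        # then soft-threshold to handle the l1 penalty.
        self.w = soft_threshold(self.w - self.lr * grad, self.lr * self.l1)

# Toy data and a run where the worker sees weights up to tau iterations old.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
w_true = np.zeros(20)
w_true[:3] = [2.0, -1.5, 1.0]
y = np.sign(X @ w_true + 0.1 * rng.normal(size=200))

server, tau, pulled = ToyParameterServer(dim=20), 2, []
for it in range(200):
    pulled.append(server.pull())
    w_stale = pulled[max(0, it - tau)]           # bounded-delay: at most tau steps old
    server.push(logistic_grad(w_stale, X, y))

print("nonzero weights:", np.count_nonzero(server.w))
```

In this toy run the staleness bound tau plays the role of the consistency relaxation: tau = 0 recovers fully synchronous proximal gradient descent, while larger tau trades freshness of the weights for (in a real system) less waiting on network round trips.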
Year
2014
Venue
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 27 (NIPS 2014)
Field
Convergence (routing), Algorithmic efficiency, Stability (learning theory), Active learning (machine learning), Computer science, Wake-sleep algorithm, Theoretical computer science, Ranging, Artificial intelligence, Logistic regression, Machine learning
DocType
Conference
Volume
27
ISSN
1049-5258
Citations
64
PageRank
3.16
References
18
Authors
4
Name                 Order   Citations   PageRank
Mu Li                1       913         42.35
David G. Andersen    2       4823        345.31
Alexander J. Smola   3       19627       1967.09
Kai Yu               4       4799        255.21