Title: Petuum: A New Platform for Distributed Machine Learning on Big Data
Abstract
How can one build a distributed framework that allows efficient deployment of a wide spectrum of modern advanced machine learning (ML) programs for industrial-scale problems using Big Models (100s of billions of parameters) on Big Data (terabytes or petabytes)? Contemporary parallelization strategies employ fine-grained operations and scheduling beyond the classic bulk-synchronous processing paradigm popularized by MapReduce, or even specialized operators relying on graphical representations of ML programs. The variety of approaches tends to pull systems and algorithms design in different directions, and it remains difficult to find a universal platform applicable to a wide range of different ML programs at scale. We propose a general-purpose framework that systematically addresses data- and model-parallel challenges in large-scale ML, by leveraging several fundamental properties underlying ML programs that make them different from conventional operation-centric programs: error tolerance, dynamic structure, and nonuniform convergence; all of these stem from the optimization-centric nature shared in ML programs' mathematical definitions, and the iterative-convergent behavior of their algorithmic solutions. These properties present unique opportunities for an integrative system design, built on bounded-staleness network synchronization and dynamic load-balancing scheduling, which is efficient, programmable, and enjoys provable correctness guarantees. We demonstrate how such a design in light of ML-first principles leads to significant performance improvements versus well-known implementations of several ML programs, allowing them to run in much less time and at considerably larger model sizes, on modestly-sized computer clusters.
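The "bounded-staleness network synchronization" the abstract refers to is the stale synchronous parallel (SSP) consistency model: each worker may run ahead with cached parameters as long as the slowest worker is within a fixed number of iterations. A minimal single-process sketch of the clock logic follows; the class and method names (`SSPTable`, `can_read`, etc.) are illustrative assumptions, not Petuum's actual API.

```python
# Minimal sketch of stale synchronous parallel (SSP) consistency.
# Names (SSPTable, staleness, can_read) are illustrative, not Petuum's API.

class SSPTable:
    def __init__(self, num_workers, staleness):
        self.staleness = staleness
        self.clock = [0] * num_workers  # per-worker iteration counters
        self.params = {}                # shared parameter store

    def advance_clock(self, worker):
        # Worker signals it finished one iteration.
        self.clock[worker] += 1

    def can_read(self, worker):
        # A worker may proceed with (possibly stale) cached parameters only
        # if it is at most `staleness` clocks ahead of the slowest worker;
        # otherwise it must block until the stragglers catch up.
        return self.clock[worker] - min(self.clock) <= self.staleness

    def update(self, worker, key, delta):
        # Updates are additive deltas, so applying them with bounded delay
        # still preserves convergence (the "error tolerance" property).
        self.params[key] = self.params.get(key, 0.0) + delta


table = SSPTable(num_workers=2, staleness=1)
table.advance_clock(0)        # worker 0 finishes iteration 1
assert table.can_read(0)      # 1 clock ahead of worker 1: within the bound
table.advance_clock(0)        # worker 0 is now 2 clocks ahead
assert not table.can_read(0)  # exceeds the staleness bound; must wait
```

In a real deployment the clock vector lives on parameter servers and `can_read` becomes a blocking wait, but the invariant enforced is the same one sketched here.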
Year: 2015
DOI: 10.1145/2783258.2783323
Venue: ACM Knowledge Discovery and Data Mining
Keywords: Data models, Computational modeling, Big data, Servers, Convergence, Mathematical model, Synchronization
Field: Data mining, Data modeling, Scheduling (computing), Computer science, Server, Systems design, Artificial intelligence, Distributed computing, Synchronization, Data parallelism, Dynamic priority scheduling, Big data, Machine learning
DocType: Journal
Volume: 1
Issue: 2
Citations: 106
PageRank: 2.53
References: 28
Authors: 10
Name            | Order | Citations | PageRank
Bo Xing         | 1     | 73324     | 71.43
Ho, Qirong      | 2     | 636       | 30.75
Wei Dai         | 3     | 333       | 12.77
Jin Kyu Kim     | 4     | 434       | 17.53
Jinliang Wei    | 5     | 304       | 10.86
Lee, Seunghak   | 6     | 122       | 3.18
Xun Zheng       | 7     | 215       | 10.72
Pengtao Xie     | 8     | 339       | 22.63
Abhimanu Kumar  | 9     | 227       | 9.76
Yaoliang Yu     | 10    | 669       | 34.33