Title
PipeDream: Fast and Efficient Pipeline Parallel DNN Training.
Abstract
PipeDream is a Deep Neural Network (DNN) training system for GPUs that parallelizes computation by pipelining execution across multiple machines. Its pipeline-parallel computing model avoids the slowdowns faced by data-parallel training when large models and/or limited network bandwidth induce high communication-to-computation ratios. PipeDream reduces communication by up to 95% for large DNNs relative to data-parallel training, and allows perfect overlap of communication and computation. PipeDream keeps all available GPUs productive by systematically partitioning DNN layers among them to balance work and minimize communication, versions model parameters for backward pass correctness, and schedules the forward and backward passes of different inputs in round-robin fashion to optimize time to target accuracy. Experiments with five different DNNs on two different clusters show that PipeDream is up to 5x faster in time-to-accuracy compared to data-parallel training.
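Two mechanisms named in the abstract can be sketched concretely: parameter versioning (a stage stashes the weights each input saw in its forward pass so the matching backward pass stays correct even after later updates) and the round-robin schedule that alternates forward and backward passes of different inputs. The sketch below is illustrative only, not PipeDream's implementation: `Stage`, `one_f_one_b`, the scalar "model", and the learning rate are all assumed names and values.

```python
# Minimal sketch of two ideas from the abstract (assumed names, not PipeDream's API).

class Stage:
    """One pipeline stage owning a slice of the model (here, a single scalar weight)."""
    def __init__(self, weight, lr=0.1):
        self.weight = weight
        self.lr = lr
        self.stash = {}  # minibatch id -> (weight version, input) used in its forward pass

    def forward(self, mb, x):
        self.stash[mb] = (self.weight, x)   # version the parameters for this input
        return x * self.weight

    def backward(self, mb, grad_out):
        w, x = self.stash.pop(mb)           # use the same version the forward pass saw
        self.weight -= self.lr * grad_out * x
        return grad_out * w                 # gradient w.r.t. this stage's input

def one_f_one_b(stage_idx, num_stages, num_minibatches):
    """Round-robin schedule: warm-up forwards, then alternate F and B, then drain."""
    warmup = min(num_stages - stage_idx - 1, num_minibatches)
    ops = [("F", i) for i in range(warmup)]
    f, b = warmup, 0
    while f < num_minibatches:
        ops += [("F", f), ("B", b)]
        f, b = f + 1, b + 1
    ops += [("B", i) for i in range(b, num_minibatches)]
    return ops

stage = Stage(weight=2.0)
y = stage.forward(mb=0, x=3.0)          # 6.0; stashes weight version 2.0
stage.weight = 5.0                      # later minibatches already updated the weight
g = stage.backward(mb=0, grad_out=1.0)  # still uses the stashed 2.0, so returns 2.0
```

In the schedule, the last stage (`stage_idx = num_stages - 1`) needs no warm-up and strictly alternates one forward with one backward, which is what keeps every stage busy in steady state.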
Year
2018
Venue
arXiv: Distributed, Parallel, and Cluster Computing
Field
Pipeline (computing), Computer science, Training system, Parallel computing, Correctness, Bandwidth (signal processing), Schedule, Artificial neural network, Computation, Distributed computing
DocType
Volume
abs/1806.03377
Citations
10
Journal
PageRank
0.46
References
3
Authors
7
Name                 Order  Citations  PageRank
Aaron Harlap         1      32         1.60
Deepak Narayanan     2      47         7.42
Amar Phanishayee     3      804        56.59
Vivek Seshadri       4      992        32.76
Nikhil R. Devanur    5      1217       95.84
Gregory R. Ganger    6      4560       383.16
Phillip B. Gibbons   7      6863       624.14