Title
Demystifying Parallel and Distributed Deep Learning: An In-Depth Concurrency Analysis
Abstract
Deep Neural Networks (DNNs) are becoming an important tool in modern computing applications. Accelerating their training is a major challenge, and techniques range from distributed algorithms to low-level circuit design. In this survey, we describe the problem from a theoretical perspective, followed by approaches for its parallelization. We present trends in DNN architectures and the resulting implications on parallelization strategies. We then review and model the different types of concurrency in DNNs: from the single operator, through parallelism in network inference and training, to distributed deep learning. We discuss asynchronous stochastic optimization, distributed system architectures, communication schemes, and neural architecture search. Based on those approaches, we extrapolate potential directions for parallelism in deep learning.
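Illustrative sketch (not code from the paper): a central scheme the survey analyzes is synchronous data-parallel stochastic gradient descent, where each worker computes a gradient on its shard of a minibatch and the shards' gradients are averaged (an allreduce) before an identical weight update on every replica. Below is a minimal NumPy simulation of that scheme; the model (linear least squares), the sharding, and all names and hyperparameters are assumptions chosen here for illustration.

    import numpy as np

    # Minimal simulation of synchronous data-parallel SGD (illustrative sketch;
    # the model and all parameters are assumptions, not taken from the survey).
    rng = np.random.default_rng(0)
    n_workers, n_features, lr = 4, 8, 0.1

    # Synthetic regression task: y = X @ w_true + noise
    w_true = rng.normal(size=n_features)
    X = rng.normal(size=(64, n_features))
    y = X @ w_true + 0.01 * rng.normal(size=64)

    w = np.zeros(n_features)  # model weights, replicated on every worker
    for step in range(200):
        # Shard the minibatch across the simulated workers
        shards = np.array_split(np.arange(64), n_workers)
        # Each worker's gradient of 0.5 * ||X_i @ w - y_i||^2 / m_i on its shard
        grads = [X[i].T @ (X[i] @ w - y[i]) / len(i) for i in shards]
        g = np.mean(grads, axis=0)  # "allreduce": average gradients across workers
        w -= lr * g                 # identical update on every replica

    print("distance to w_true:", np.linalg.norm(w - w_true))

The asynchronous variants discussed in the survey drop the averaging barrier and let workers apply possibly stale gradients to a shared model independently.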
Year
2018
DOI
10.1145/3320060
Venue
ACM Computing Surveys (CSUR)
Keywords
Deep learning, distributed computing, parallel algorithms
Field
Asynchronous communication, Stochastic gradient descent, Concurrency, Parallel computing, Circuit design, Distributed algorithm, Artificial intelligence, Deep learning, Machine learning, Deep neural networks, Mathematics
DocType
Journal
Volume
abs/1802.09941
Issue
4
ISSN
0360-0300
Citations
46
PageRank
1.44
References
168
Authors
2
Name             Order  Citations  PageRank
Tal Ben-Nun      1      116        14.21
Torsten Hoefler  2      2197       163.64