Title
FiLayer: A Novel Fine-Grained Layer-Wise Parallelism Strategy for Deep Neural Networks
Abstract
Data parallelism and model parallelism are regarded as the two major parallelism strategies for deep neural networks (DNNs). However, both achieve acceleration mainly through coarse-grained, network-model-based parallelization, and neither fully exploits the parallelism available in network models and many-core systems (such as GPUs). In this work, we propose a novel fine-grained, layer-wise parallelism strategy named FiLayer, which comprises inter-layer parallelism and intra-layer parallelism. The former allows several adjacent layers in a network model to be processed in a pipelined manner. The latter divides the operations in one layer into several parts and processes them in parallel. CUDA streams are applied to realize both forms of fine-grained parallelism. FiLayer is implemented by extending Caffe. Several typical datasets are used for the performance evaluation. The experimental results indicate that FiLayer can help Caffe achieve speedups of 1.58x-2.19x.
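To illustrate the mechanism the abstract describes, the following is a minimal CUDA sketch (not the authors' Caffe-based implementation) of the inter-layer pipelining idea: a mini-batch is split into slices, each slice is bound to its own CUDA stream, and two toy "layers" are launched back to back on that stream, so layer 2 of one slice can overlap with layer 1 of the next. The kernel names, slice count, and sizes are illustrative assumptions, not values from the paper.

// Minimal sketch of inter-layer pipelining with CUDA streams (assumed setup).
#include <cuda_runtime.h>
#include <cstdio>

__global__ void layer1(float* x, int n) {          // toy "layer": x += 1
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] += 1.0f;
}

__global__ void layer2(float* x, int n) {          // toy "layer": x *= 2
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= 2.0f;
}

int main() {
    const int kSlices = 4;                         // pipeline depth (assumed)
    const int kSliceLen = 1 << 20;                 // elements per slice (assumed)
    const int kTotal = kSlices * kSliceLen;

    float* h = nullptr;
    float* d = nullptr;
    cudaMallocHost((void**)&h, kTotal * sizeof(float));  // pinned memory for async copies
    cudaMalloc((void**)&d, kTotal * sizeof(float));
    for (int i = 0; i < kTotal; ++i) h[i] = 0.0f;

    cudaStream_t streams[kSlices];
    for (int s = 0; s < kSlices; ++s) cudaStreamCreate(&streams[s]);

    dim3 block(256), grid((kSliceLen + 255) / 256);
    for (int s = 0; s < kSlices; ++s) {
        float* hs = h + s * kSliceLen;
        float* ds = d + s * kSliceLen;
        // Copy-in, layer1, layer2, copy-out are serialized within one stream,
        // but slices issued on different streams can overlap with each other.
        cudaMemcpyAsync(ds, hs, kSliceLen * sizeof(float),
                        cudaMemcpyHostToDevice, streams[s]);
        layer1<<<grid, block, 0, streams[s]>>>(ds, kSliceLen);
        layer2<<<grid, block, 0, streams[s]>>>(ds, kSliceLen);
        cudaMemcpyAsync(hs, ds, kSliceLen * sizeof(float),
                        cudaMemcpyDeviceToHost, streams[s]);
    }
    cudaDeviceSynchronize();

    printf("h[0] = %f (expect 2.0)\n", h[0]);      // (0 + 1) * 2

    for (int s = 0; s < kSlices; ++s) cudaStreamDestroy(streams[s]);
    cudaFree(d);
    cudaFreeHost(h);
    return 0;
}

Within a stream, the copy-in, the two layer kernels, and the copy-out are serialized, which preserves the layer dependency for each slice; the pipelining comes entirely from issuing independent slices on different streams, which is the essence of the inter-layer scheme sketched here.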
Year
2018
DOI
10.1007/978-3-030-01424-7_32
Venue
ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2018, PT III
Keywords
Deep learning, Fine-grained parallelism, CUDA stream
Field
Computer science, CUDA, Caffe, Parallel computing, Data parallelism, Acceleration, Artificial intelligence, Deep learning, Deep neural networks, Machine learning, Network model
DocType
Conference
Volume
11141
ISSN
0302-9743
Citations
0
PageRank
0.34
References
10
Authors
5
Name            Order  Citations  PageRank
Wenbin Jiang    1      355        36.55
Yangsong Zhang  2      75         11.65
Pai Liu         3      1          0.68
Geyan Ye        4      0          0.68
Hai Jin         5      6544       644.63