Title
Partition Pruning: Parallelization-Aware Pruning for Dense Neural Networks
Abstract
As recent neural networks are improved to be more accurate, their model sizes grow exponentially. Consequently, a huge number of parameters must be loaded from and stored to the memory hierarchy and processed in order to perform training or inference. This growth in parameter count poses a major challenge for real-time deployment, since improvements in memory bandwidth cannot keep pace with the growth in model complexity. While some operations in neural network processing, such as convolutional layers, are compute-intensive, dense layers are bottlenecked by memory bandwidth. To address this issue, this paper proposes Partition Pruning for dense layers, which reduces the number of required parameters while taking parallelization into account. We evaluated the performance and energy consumption of parallel inference on the partitioned models, observing a 7.72× speedup and a 2.73× reduction in energy when computing the pruned fully connected layers of the TinyVGG16 model, compared with running the unpruned model on a single accelerator. Moreover, our method showed only a limited reduction in accuracy when partitioning fully connected layers.
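The abstract does not detail the pruning algorithm itself, so the sketch below is only an illustration of the general idea it describes: splitting a fully connected layer across several accelerators and pruning each partition independently so that the parallel workload stays balanced. The function names (partition_and_prune, parallel_dense_inference), the column-wise split, and the magnitude-based pruning criterion are assumptions made for illustration, not the paper's actual method.

```python
import numpy as np

def partition_and_prune(W, num_partitions, keep_ratio):
    """Split a dense layer's weight matrix column-wise into one partition per
    accelerator, then magnitude-prune each partition independently so every
    partition keeps the same fraction of its weights (illustrative assumption)."""
    partitions = np.array_split(W, num_partitions, axis=1)
    pruned = []
    for part in partitions:
        k = max(1, int(keep_ratio * part.size))
        # Threshold = k-th largest absolute weight inside this partition.
        threshold = np.sort(np.abs(part), axis=None)[-k]
        pruned.append(part * (np.abs(part) >= threshold))
    return pruned

def parallel_dense_inference(x, pruned_partitions):
    """Each accelerator multiplies the input by its own column slice;
    concatenating the partial outputs yields the full layer output."""
    return np.concatenate([x @ part for part in pruned_partitions])

# Toy example: a 512x256 fully connected layer split across 4 accelerators,
# keeping 25% of the weights in each partition.
rng = np.random.default_rng(0)
W = rng.standard_normal((512, 256))
x = rng.standard_normal(512)
parts = partition_and_prune(W, num_partitions=4, keep_ratio=0.25)
y = parallel_dense_inference(x, parts)
print(y.shape)  # (256,)
```

Pruning inside each partition rather than globally keeps the per-accelerator parameter count equal, which is the kind of parallelization-aware constraint the abstract alludes to.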
Year
2020
DOI
10.1109/PDP50117.2020.00053
Venue
2020 28th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP)
Keywords
Parallelization, Deep Neural Network, Pruning, Partitioning, Hardware Accelerator
DocType
Conference
ISSN
1066-6192
ISBN
978-1-7281-6583-7
Citations
0
PageRank
0.34
References
4
Authors
4
Name                 Order  Citations  PageRank
Sina Shahhosseini    1      0          0.34
Ahmad Albaqsami      2      1          1.04
Masoomeh Jasemi      3      0          0.34
Nader Bagherzadeh    4      1674       182.54