Title
Accelerating Sparse CNN Inference on GPUs with Performance-Aware Weight Pruning
Abstract
Weight pruning is a popular technique for reducing the size and computational complexity of Convolutional Neural Networks (CNNs). Despite its success in reducing model size, weight pruning has brought limited benefit to CNN inference performance due to the irregularity it introduces into sparse convolution operations. In this work, we aim to improve the performance of sparse convolutions on GPUs by mitigating this irregularity. We find that existing performance optimization techniques for sparse matrix computations fail to accelerate sparse convolutions, and we observe that the main performance bottleneck is the heavy use of control-flow instructions. Based on this observation, we propose a new GEMM-based implementation of sparse convolutions. Our main idea is to extract dense blocks of non-zeros from the sparse convolution kernels and use dense matrix-matrix multiplication on these blocks to achieve high throughput. For cases where many non-zero weights cannot be grouped into dense blocks, we propose a performance-aware re-pruning strategy that removes the least important weights in the sparse kernels to further improve throughput. Experimental results with five real-world pruned CNN models show that our techniques significantly improve both the layer-wise performance of sparse convolution operations and the end-to-end performance of CNN inference.
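The abstract outlines the core idea: lower the sparse convolution to GEMM, run dense GEMM only on extracted dense blocks of non-zero weights, and re-prune weights that cannot be grouped into such blocks. Below is a minimal NumPy sketch of that idea under a simple column-blocking assumption; the function name, block size, density threshold, and the drop-the-whole-block pruning rule are illustrative stand-ins, not the paper's actual algorithm or implementation.

import numpy as np

def sparse_conv_as_gemm(W, X, block_size=8, density_threshold=0.5):
    """Compute W @ X by partitioning the sparse filter matrix W into
    column blocks. Blocks whose non-zero density clears the threshold
    are multiplied with a dense GEMM against the matching rows of the
    im2col-lowered input X; weights in blocks below the threshold are
    dropped, loosely mimicking performance-aware re-pruning.
    (Block granularity and threshold are illustrative assumptions.)"""
    out = np.zeros((W.shape[0], X.shape[1]))
    for c0 in range(0, W.shape[1], block_size):
        blk = W[:, c0:c0 + block_size]               # filter sub-block
        if np.count_nonzero(blk) / blk.size >= density_threshold:
            out += blk @ X[c0:c0 + blk.shape[1], :]  # dense sub-GEMM
        # else: block is too sparse; its weights are re-pruned (skipped)
    return out

# Toy usage: 16 filters, 64-column lowered kernel, roughly 90% sparsity.
rng = np.random.default_rng(0)
W = rng.standard_normal((16, 64)) * (rng.random((16, 64)) > 0.9)
X = rng.standard_normal((64, 32))  # im2col-lowered input patches
y = sparse_conv_as_gemm(W, X)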
Year
2020
DOI
10.1145/3410463.3414648
Venue
PACT '20: International Conference on Parallel Architectures and Compilation Techniques, Virtual Event, GA, USA, October 2020
DocType
Conference
ISBN
978-1-4503-8075-1
Citations
2
PageRank
0.37
References
0
Authors
4
Name                Order  Citations  PageRank
Masuma Akter Rumi   1      2          0.37
Xiaolong Ma         2      22         5.90
Yanzhi Wang         3      1082       136.11
Peng Jiang          4      2          1.04