Title
HMC-TRAN: A Tensor-core Inspired Hierarchical Model Compression for Transformer-based DNNs on GPU
Abstract
Although Transformer-based deep learning models have been widely used in many natural language processing (NLP) tasks as well as computer vision, they suffer from gigantic model size and long latency. Network pruning can reduce the computational cost and model size. However, existing works mainly focus on irregular (sparse) pruning, which often causes irregular computations and extra indices per remaining weight. In this work, we propose a Tensor-core inspired hierarchical model compression method to push the performance limit on modern GPUs. We present two modes of the two-step process. In the first mode, we use a Tensor-core aware block-based weight pruning method to exploit model sparsity in a coarse-grained manner, and then use low-rank decomposition [33] to further reduce the weight storage in a fine-grained manner. In the second mode, we first use irregular pruning to achieve a highly sparse model, and then apply the Tensor-core aware weight constraint on the sparse model to decompose the sparse matrix into several smaller but Tensor-core friendly sub-matrices. Experiments on the Transformer and BERT-Base models show that the proposed method outperforms the state-of-the-art.
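To make the first mode concrete, below is a minimal NumPy sketch of coarse-grained block pruning followed by low-rank decomposition. The block size, keep ratio, and rank are illustrative placeholders, not the paper's settings, and the block-L2-norm pruning criterion and truncated SVD are common techniques assumed here, not necessarily the authors' exact method.

```python
# Sketch: block pruning + low-rank decomposition (assumed illustration,
# not the paper's implementation). Block size / keep ratio / rank are
# hypothetical values.
import numpy as np

def block_prune(W, block=16, keep_ratio=0.5):
    """Zero out the weight blocks with the smallest L2 norms,
    giving coarse-grained, Tensor-core friendly sparsity."""
    rows, cols = W.shape
    assert rows % block == 0 and cols % block == 0
    norms = np.array([[np.linalg.norm(W[i:i + block, j:j + block])
                       for j in range(0, cols, block)]
                      for i in range(0, rows, block)])
    threshold = np.quantile(norms, 1.0 - keep_ratio)
    pruned = W.copy()
    for bi, i in enumerate(range(0, rows, block)):
        for bj, j in enumerate(range(0, cols, block)):
            if norms[bi, bj] < threshold:
                pruned[i:i + block, j:j + block] = 0.0
    return pruned

def low_rank(W, rank=64):
    """Truncated SVD: approximate W by two thin factors, W ~= U_r @ V_r."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * S[:rank]  # fold singular values into the left factor
    return U_r, Vt[:rank, :]

# Toy usage on a BERT-sized 768x768 weight matrix.
W = np.random.randn(768, 768).astype(np.float32)
Wp = block_prune(W)
U_r, V_r = low_rank(Wp, rank=64)
print("relative error:", np.linalg.norm(Wp - U_r @ V_r) / np.linalg.norm(Wp))
```

Storing the two rank-64 factors takes 2 x 768 x 64 values instead of 768 x 768, roughly a 6x reduction on top of the block sparsity.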
Year: 2021
DOI: 10.1145/3453688.3461740
Venue: GLSVLSI
DocType: Conference
Citations: 1
PageRank: 0.37
References: 0
Authors: 10
Name           Order  Citations  PageRank
Shaoyi Huang   1      2          2.44
Shiyang Chen   2      1          1.73
Hongwu Peng    3      2          1.76
Daniel Manu    4      1          1.05
Zhenglun Kong  5      4          2.77
Geng Yuan      6      7          3.56
Lei Yang       7      1          1.39
Shusen Wang    8      1          0.37
Hang Liu       9      27         4.94
Caiwen Ding    10     142        26.52