Title
Layer Importance Estimation with Imprinting for Neural Network Quantization
Abstract
Neural network quantization achieves high compression rates by representing weights and activations at a fixed low bit-width while maintaining the accuracy of the high-precision original network. Mixed-precision (per-layer bit-width) quantization, however, requires careful tuning to maintain accuracy while achieving further compression and higher granularity than fixed-precision quantization. We propose an accuracy-aware criterion to quantify each layer's importance rank. Our method applies imprinting per layer, which acts as an efficient proxy module for accuracy estimation. We rank the layers by the accuracy gain each contributes over the previous modules and iteratively quantize first those with the smallest gain. Previous mixed-precision methods either rely on expensive search techniques such as reinforcement learning (RL) or on end-to-end optimization that offers little insight into the resulting quantization configuration. Our method is a one-shot, efficient, accuracy-aware importance estimation and thus provides better interpretability of the selected bit-width configuration.
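To make the ranking criterion above concrete, here is a minimal Python sketch of one plausible reading: an imprinted nearest-prototype classifier is built from each layer's features to serve as an accuracy proxy, and layers are ordered by their accuracy gain over the preceding layer so the least-contributing layers are quantized first. The helper names (imprint_accuracy, rank_layers_by_accuracy_gain), the cosine-similarity prototype classifier, and the exact gain definition are illustrative assumptions, not the paper's implementation.

import numpy as np

def imprint_accuracy(features, labels, num_classes):
    # Weight imprinting as an accuracy proxy: each class prototype is the
    # mean of its L2-normalized feature vectors; samples are classified by
    # cosine similarity to the nearest prototype. Assumes every class has
    # at least one sample in `labels`.
    feats = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-12)
    protos = np.stack([feats[labels == c].mean(axis=0) for c in range(num_classes)])
    protos /= np.linalg.norm(protos, axis=1, keepdims=True) + 1e-12
    preds = (feats @ protos.T).argmax(axis=1)
    return float((preds == labels).mean())

def rank_layers_by_accuracy_gain(layer_features, labels, num_classes):
    # layer_features: list of (num_samples, dim_l) arrays, one per layer,
    # e.g. pooled activations collected on a small calibration set.
    accs = [imprint_accuracy(f, labels, num_classes) for f in layer_features]
    # Gain of layer l = proxy accuracy at layer l minus proxy accuracy at
    # layer l-1; the first layer's gain is measured against chance level.
    chance = 1.0 / num_classes
    gains = [accs[0] - chance] + [a - b for b, a in zip(accs, accs[1:])]
    # Layers with the smallest gain come first, i.e. are quantized first.
    return sorted(range(len(gains)), key=gains.__getitem__)

# Example: order = rank_layers_by_accuracy_gain([f1, f2, f3], y, num_classes=10)

Under this reading, layers at the front of the returned ordering would be assigned lower bit-widths first, while high-gain layers retain more precision.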
Year
2021
DOI
10.1109/CVPRW53098.2021.00273
Venue
2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW 2021)
DocType
Conference
ISSN
2160-7508
Citations
0
PageRank
0.34
References
0
Authors
4
Name              Order  Citations  PageRank
Hongyang Liu      1      0          0.34
Sara Elkerdawy    2      0          0.34
Ray Nilanjan      3      541        55.39
Mostafa Elhoushi  4      0          1.35