Title
One Weight Bitwidth to Rule Them All
Abstract
Weight quantization for deep ConvNets has shown promising results for applications such as image classification and semantic segmentation, and it is especially important when memory storage is limited. However, when aiming for quantization without accuracy degradation, different tasks may end up with different bitwidths. This creates complexity for software and hardware support, and the complexity compounds when one considers mixed-precision quantization, where each layer's weights use a different bitwidth. Our key insight is that optimizing for the smallest bitwidth subject to no accuracy degradation is not necessarily an optimal strategy, because one cannot decide between two bitwidths if one yields a smaller model while the other yields better accuracy. In this work, we take a first step toward understanding whether some weight bitwidths are better than others by aligning all candidates to the same model size using a width-multiplier. Under this setting, somewhat surprisingly, we show that using a single bitwidth for the whole network can achieve better accuracy than mixed-precision quantization targeting zero accuracy degradation when both have the same model size. In particular, our results suggest that when the number of channels becomes a target hyperparameter, a single weight bitwidth throughout the network shows superior results for model compression.
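As a rough illustration of the size-alignment idea described in the abstract, the following minimal sketch (not from the paper) computes the width-multiplier needed to keep total weight storage constant when lowering the bitwidth. It assumes the parameter count of a ConvNet scales roughly quadratically with the width-multiplier (both input and output channels are scaled), so model size is proportional to w^2 * bitwidth; the function name and the 8-bit reference are illustrative choices, not the authors' exact setup.

```python
import math

def width_multiplier_for_equal_size(ref_bitwidth: float, target_bitwidth: float) -> float:
    """Width-multiplier that keeps total weight storage constant when
    switching from ref_bitwidth to target_bitwidth.

    Assumption: parameter count scales ~quadratically with the
    width-multiplier w, so model size ~ w**2 * bitwidth.
    Solving w**2 * target_bitwidth = 1**2 * ref_bitwidth gives
    w = sqrt(ref_bitwidth / target_bitwidth).
    """
    return math.sqrt(ref_bitwidth / target_bitwidth)

if __name__ == "__main__":
    # Example: matching the model size of an 8-bit baseline with lower bitwidths.
    for b in (8, 4, 2, 1):
        w = width_multiplier_for_equal_size(ref_bitwidth=8, target_bitwidth=b)
        print(f"{b}-bit weights -> width-multiplier ~{w:.2f}x for equal model size")
```

Under this assumption, 2-bit weights at roughly 2x the channel count occupy the same storage as 8-bit weights at 1x, which is the kind of iso-size comparison the paper uses to compare bitwidths fairly.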
Year
2020
DOI
10.1007/978-3-030-68238-5_7
Venue
ECCV Workshops
Keywords
Model compression, Deep learning architectures, Quantization, ConvNets, Image classification
DocType
Conference
Citations
1
PageRank
0.35
References
1
Authors
4
Name	Order	Citations	PageRank
Ting-Wu Chin	1	26	5.66
Pierce I-Jen Chuang	2	1	0.35
Vikas Chandra	3	691	59.76
Diana Marculescu	4	2725	223.87