Title
Optimize Deep Convolutional Neural Network with Ternarized Weights and High Accuracy.
Abstract
Deep convolutional neural networks have achieved great success in many artificial intelligence applications. However, their enormous model size and massive computation cost have become the main obstacles to deploying such powerful algorithms on low-power, resource-limited embedded systems. As a countermeasure, in this work we propose statistical weight scaling and residual expansion methods that reduce the bit-width of all network weight parameters to ternary values (i.e., −1, 0, +1), with the objective of greatly reducing the model size and computation cost while minimizing the accuracy degradation caused by model compression. At about a 16X model compression rate, our ternarized ResNet-32/44/56 outperform their full-precision counterparts by 0.12%, 0.24%, and 0.18% on the CIFAR-10 dataset. We also test our ternarization method with AlexNet and ResNet-18 on the ImageNet dataset; both achieve the best top-1 accuracy among recent similar works at the same 16X compression rate. When our residual expansion method is further incorporated, our ternarized ResNet-18 even improves top-5 accuracy by 0.61% and degrades top-1 accuracy by only 0.42% relative to its full-precision counterpart on ImageNet, at an 8X model compression rate. It outperforms the recent ABC-Net by 1.03% in top-1 accuracy and 1.78% in top-5 accuracy, with an around 1.25X higher compression rate and more than a 6X computation reduction owing to weight sparsity.
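The two ideas named in the abstract, ternarization with a statistically derived scaling factor and residual expansion, can be sketched in a few lines of PyTorch. The threshold heuristic below (0.7 times the mean absolute weight, as in ternary weight networks) and the function names are illustrative assumptions, not the paper's exact formulation:

```python
import torch

def ternarize(w: torch.Tensor, delta_scale: float = 0.7) -> torch.Tensor:
    """Map a weight tensor to {-alpha, 0, +alpha}.

    The threshold and scaling rule follow the common TWN-style heuristic,
    used here only as a stand-in for the paper's statistical weight scaling.
    """
    delta = delta_scale * w.abs().mean()             # statistical threshold
    mask = (w.abs() > delta).float()                 # positions kept nonzero
    alpha = (w.abs() * mask).sum() / mask.sum().clamp(min=1.0)  # scaling factor
    return alpha * torch.sign(w) * mask

def residual_expand(w: torch.Tensor, num_terms: int = 2) -> torch.Tensor:
    """Approximate w as a sum of ternarized tensors: each extra term
    ternarizes the residual left by the previous terms (a sketch of the
    residual expansion idea; the paper's exact rule may differ)."""
    approx = torch.zeros_like(w)
    for _ in range(num_terms):
        approx = approx + ternarize(w - approx)
    return approx

w = torch.randn(64, 3, 3, 3)                  # e.g. one conv layer's weights
print(torch.unique(ternarize(w)).numel())     # 3 distinct values: -alpha, 0, +alpha
print((w - residual_expand(w, 2)).abs().mean())  # residual error shrinks with more terms
```

This sketch is also consistent with the compression rates quoted above: a single ternary tensor needs about 2 bits per weight (roughly 16X versus 32-bit floats), while a two-term residual expansion needs about 4 bits (roughly 8X).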
Year
2019
DOI
10.1109/wacv.2019.00102
Venue
2019 IEEE Winter Conference on Applications of Computer Vision (WACV)
Keywords
Computational modeling, Degradation, Training, Neural networks, Convolution, Image coding, Hardware
DocType
Conference
Volume
abs/1807.07948
ISSN
2472-6737
Citations
3
PageRank
0.40
References
12
Authors
3
Name         Order  Citations  PageRank
Zhezhi He    1      1362       5.37
Boqing Gong  2      6853       3.29
Deliang Fan  3      3755       3.66