Title
A CNN channel pruning low-bit framework using weight quantization with sparse group lasso regularization.
Abstract
The deployment of large-scale Convolutional Neural Networks (CNNs) on power-limited devices is hindered by their high computational cost and storage requirements. In this paper, we propose a novel framework for CNNs that simultaneously achieves channel pruning and low-bit quantization by combining weight quantization with Sparse Group Lasso (SGL) regularization. We model this framework as a discretely constrained problem and solve it with the Alternating Direction Method of Multipliers (ADMM). Unlike previous approaches, the proposed method reduces not only model size but also the number of computational operations. In the experimental section, we evaluate the proposed framework on the CIFAR datasets with several popular models, including VGG-7/16/19 and ResNet-18/34/50; the results demonstrate that the proposed method obtains low-bit networks and dramatically reduces redundant channels with only a slight loss of inference accuracy. Furthermore, we visualize and analyze the weight tensors, which shows their compact group-sparsity structure.
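The combination described in the abstract can be illustrated with a minimal sketch. The function names, group definition (one group per output channel), and regularization weights below are illustrative assumptions, not the paper's actual implementation; the sketch only shows the two ingredients the abstract names: an SGL penalty that drives whole channels to zero, and a uniform low-bit weight quantizer.

```python
import numpy as np

def sparse_group_lasso(W, lam_group=1e-3, lam_l1=1e-4):
    """Illustrative Sparse Group Lasso penalty for a conv weight tensor.

    Groups are output channels: W has shape
    (out_channels, in_channels, kH, kW). Driving a whole group's
    norm to zero corresponds to pruning that channel.
    """
    # Group (L2,1) term: the sum of per-channel L2 norms encourages
    # entire channels to become zero.
    group_norms = np.sqrt((W.reshape(W.shape[0], -1) ** 2).sum(axis=1))
    group_term = lam_group * group_norms.sum()
    # Element-wise L1 term keeps the surviving channels sparse.
    l1_term = lam_l1 * np.abs(W).sum()
    return group_term + l1_term

def quantize(W, bits=2):
    """Uniform symmetric low-bit quantization (illustrative)."""
    levels = 2 ** (bits - 1) - 1  # e.g. 1 for 2-bit values {-a, 0, +a}
    scale = np.abs(W).max() / max(levels, 1)
    return np.round(W / scale).clip(-levels, levels) * scale
```

In an ADMM-style training loop, the quantization constraint would be handled through an auxiliary variable and dual updates rather than applied directly as above; this sketch only shows the penalty and the projection step in isolation.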
Year
2020
DOI
10.3233/JIFS-191014
Venue
JOURNAL OF INTELLIGENT & FUZZY SYSTEMS
Keywords
Convolutional neural network (CNN), weight quantization, sparse group lasso (SGL), alternating direction method of multipliers (ADMM), channel pruning
DocType
Journal
Volume
39
Issue
1
ISSN
1064-1246
Citations
0
PageRank
0.34
References
0
Authors
6
Name            Order  Citations  PageRank
Xin Long        1      0          0.68
Xiangrong Zeng  2      10         5.20
Yan Liu         3      241        73.08
Huaxin Xiao     4      22         8.41
Maojun Zhang    5      314        48.74
Zongcheng Ben   6      0          0.34