Title
Weight-Dependent Gates for Differentiable Neural Network Pruning
Abstract
In this paper, we propose a simple and effective network pruning framework, which introduces novel weight-dependent gates to prune filters adaptively. We argue that the pruning decision should depend on the convolutional weights; in other words, it should be a learnable function of the filter weights. We thus construct weight-dependent gates (W-Gates) to learn information from the filter weights and obtain binary filter gates that prune or keep each filter automatically. To prune the network under a hardware constraint, we train a Latency Predict Net (LPNet) to estimate the hardware latency of candidate pruned networks. Based on the proposed LPNet, we can optimize the W-Gates and the pruning ratio of each layer under a latency constraint. The whole framework is differentiable and can be optimized by a gradient-based method to obtain a compact network with a better trade-off between accuracy and efficiency. We demonstrate the effectiveness of our method on ResNet34 and ResNet50, achieving up to 1.33/1.28 higher Top-1 accuracy with lower hardware latency on ImageNet. Compared with state-of-the-art pruning methods, our method achieves superior performance. (This work was done while Yun Li, Weiqun Wu, and Zechun Liu were interns at Megvii Inc. (Face++).)
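The abstract describes the mechanism only at a high level: a small learnable function maps each filter's weights to a binary keep/prune gate, and the whole pipeline stays differentiable end to end. Below is a minimal PyTorch sketch of that idea. The two-layer scorer, its hidden width, the hard-threshold straight-through estimator, and the names BinaryGate, WGates, and GatedConv are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class BinaryGate(torch.autograd.Function):
    """Hard 0/1 gate; a straight-through estimator keeps it trainable."""
    @staticmethod
    def forward(ctx, scores):
        return (scores > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        # Pass gradients straight through the non-differentiable step.
        return grad_output

class WGates(nn.Module):
    """Weight-dependent gates: a learnable function of the filter weights
    that outputs one binary keep/prune gate per output channel."""
    def __init__(self, in_channels, kernel_size, hidden=16):
        super().__init__()
        flat = in_channels * kernel_size * kernel_size
        self.scorer = nn.Sequential(
            nn.Linear(flat, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, 1),
        )

    def forward(self, weight):
        # weight: (out_channels, in_channels, k, k), one score per filter
        scores = self.scorer(weight.flatten(1))        # (out_channels, 1)
        return BinaryGate.apply(scores).view(-1)       # (out_channels,)

class GatedConv(nn.Module):
    """Conv layer whose output channels are masked by weight-dependent gates."""
    def __init__(self, in_channels, out_channels, kernel_size, **kw):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size, **kw)
        self.gates = WGates(in_channels, kernel_size)

    def forward(self, x):
        g = self.gates(self.conv.weight)               # (out_channels,)
        return self.conv(x) * g.view(1, -1, 1, 1)

# Usage: gates decide per filter whether its output channel survives.
layer = GatedConv(64, 128, 3, padding=1)
y = layer(torch.randn(2, 64, 32, 32))                  # (2, 128, 32, 32)
```

In the full method, the sum of the gates (the surviving channel counts) would additionally feed the LPNet so that predicted latency can act as a differentiable constraint; that component is omitted here.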
Year: 2020
DOI: 10.1007/978-3-030-68238-5_3
Venue: ECCV Workshops
DocType: Conference
Citations: 0
PageRank: 0.34
References: 0
Authors: 7
Name            Order  Citations  PageRank
Yun Li          1      0          1.01
Weiqun Wu       2      0          0.68
Zechun Liu      3      16         5.27
C. Zhang        4      90         12.48
Xiangyu Zhang   5      13044      437.66
Haotian Yao     6      0          0.34
Baoqun Yin      7      8          2.83