Title
Weak sub-network pruning for strong and efficient neural networks
Abstract
Pruning methods that compress and accelerate deep convolutional neural networks (CNNs) have recently attracted growing attention, with a view to deploying pruned networks on resource-constrained hardware devices. However, most existing methods explore pruning at small granularities, such as individual weights, kernels and filters. Achieving a high compression ratio with little performance loss at these granularities therefore requires pruning the whole network iteratively. To address these issues, we theoretically analyze the relationship between activation and gradient sparsity and channel saliency. Based on our findings, we propose a novel and effective method of weak sub-network pruning (WSP). Specifically, for a well-trained network model, we divide the whole compression process into two non-iterative stages. The first stage directly obtains a strong sub-network by pruning away the weakest one. We first identify the less important channels across all layers to determine the weakest sub-network, in which each selected channel contributes minimally to both the feed-forward and feed-backward processes. A one-shot pruning strategy is then executed to form a strong sub-network ready for fine-tuning, which greatly reduces the impact of network depth and width on compression efficiency, especially for deep and wide architectures. The second stage globally fine-tunes the strong sub-network for several epochs to restore the original recognition accuracy. Furthermore, our method operates on both the fully-connected and convolutional layers for simultaneous compression and acceleration. Comprehensive experiments on VGG16 and ResNet-50 over a variety of popular benchmarks, such as ImageNet-1K, CIFAR-10, CUB-200 and PASCAL VOC, demonstrate that WSP achieves superior performance on classification, domain adaptation and object detection tasks with a small model size. Our source code is available at https://github.com/QingbeiGuo/WSP.git.
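The two-stage procedure outlined in the abstract can be sketched in a few lines of PyTorch. The snippet below is an illustrative sketch only, not the released WSP implementation: a first-order Taylor score on the filter weights stands in for the paper's activation/gradient-based channel-saliency criterion, pruning is simulated with channel masks rather than by rebuilding slimmer layers, and all function names and the pruning ratio are assumptions chosen for brevity.

import torch
import torch.nn as nn

def accumulate_channel_saliency(model, loss_fn, data_loader, device="cpu"):
    # Score every Conv2d output channel with a simple first-order Taylor
    # proxy, |sum(w * dL/dw)| per filter, accumulated over the batches in
    # data_loader (a stand-in for the paper's saliency criterion).
    convs = [m for m in model.modules() if isinstance(m, nn.Conv2d)]
    scores = {m: torch.zeros(m.out_channels) for m in convs}
    model.to(device).train()
    for x, y in data_loader:
        model.zero_grad()
        loss_fn(model(x.to(device)), y.to(device)).backward()
        for m in convs:
            s = (m.weight * m.weight.grad).sum(dim=(1, 2, 3)).abs()
            scores[m] += s.detach().cpu()
    return scores

def one_shot_prune(scores, prune_ratio=0.3):
    # Stage 1: rank channels globally, select the weakest fraction across all
    # layers (the "weakest sub-network"), and remove it in a single step by
    # zeroing the corresponding filters.
    all_scores = torch.cat(list(scores.values()))
    threshold = torch.quantile(all_scores, prune_ratio)
    masks = {}
    for m, s in scores.items():
        keep = (s > threshold).float()                    # 1 = keep, 0 = prune
        masks[m] = keep
        with torch.no_grad():
            m.weight.mul_(keep.view(-1, 1, 1, 1).to(m.weight.device))
            if m.bias is not None:
                m.bias.mul_(keep.to(m.bias.device))
    return masks

def fine_tune(model, loss_fn, data_loader, masks, epochs=2, lr=1e-3, device="cpu"):
    # Stage 2: globally fine-tune the remaining strong sub-network for a few
    # epochs; masks are re-applied after each step so pruned channels stay zero.
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    model.to(device).train()
    for _ in range(epochs):
        for x, y in data_loader:
            opt.zero_grad()
            loss_fn(model(x.to(device)), y.to(device)).backward()
            opt.step()
            with torch.no_grad():
                for m, keep in masks.items():
                    m.weight.mul_(keep.view(-1, 1, 1, 1).to(m.weight.device))
                    if m.bias is not None:
                        m.bias.mul_(keep.to(m.bias.device))

A real structured-pruning pipeline would then rebuild each convolution (and the matching input channels of the following layer, as well as any affected fully-connected layer) with only the kept channels, so that the memory and latency savings are actually realized; the masking above merely models the channel selection and the accuracy-recovery fine-tuning.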
Year
2021
DOI
10.1016/j.neunet.2021.09.015
Venue
Neural Networks
Keywords
Deep neural network, Weak sub-network pruning, Compression, Acceleration
DocType
Journal
Volume
144
Issue
1
ISSN
0893-6080
Citations
0
PageRank
0.34
References
0
Authors
4
1. Qingbei Guo
2. Xiaojun Wu
3. J. Kittler
4. Zhiquan Feng