Title
Anonymous Model Pruning for Compressing Deep Neural Networks
Abstract
Many deep neural network compression algorithms require fine-tuning on the source dataset, which makes them impractical when the source data are unavailable. Although data-free methods can overcome this problem, they often suffer from a large loss of accuracy. In this paper, we propose a novel approach named Anonymous-Model Pruning (AMP), which compresses a network without the source data while keeping the accuracy loss small. AMP compresses deep neural networks by automatically searching for the pruning rate and fine-tuning the compressed model under a teacher-student paradigm. The key innovations are that the pruning rate is determined automatically and that the fine-tuning process is guided by the uncompressed network instead of labels. Even without the source dataset, our method achieves accuracy comparable to existing pruning methods at a similar pruning rate. For example, on ResNet50, AMP incurs only a 0.76% loss in top-1 accuracy at a 32.72% pruning rate.
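The abstract describes fine-tuning a pruned model under the guidance of the uncompressed network instead of labels. Below is a minimal sketch of such label-free teacher-student fine-tuning, not the authors' implementation: the pruning-rate search is elided, and `teacher`, `student`, `unlabeled_loader`, and the temperature `T` are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def distill_finetune(teacher, student, unlabeled_loader, epochs=1, lr=1e-3, T=4.0):
    """Fine-tune a pruned student to match the uncompressed teacher's outputs (no labels)."""
    teacher.eval()
    student.train()
    optimizer = torch.optim.SGD(student.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for x in unlabeled_loader:            # images only; no ground-truth labels needed
            with torch.no_grad():
                t_logits = teacher(x)         # soft targets from the uncompressed teacher
            s_logits = student(x)
            # KL divergence between temperature-softened output distributions
            loss = F.kl_div(
                F.log_softmax(s_logits / T, dim=1),
                F.softmax(t_logits / T, dim=1),
                reduction="batchmean",
            ) * (T * T)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return student
```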
Year
2020
DOI
10.1109/MIPR49039.2020.00040
Venue
2020 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR)
Keywords
network compression, knowledge distillation, pruning
DocType
Conference
ISBN
978-1-7281-4273-9
Citations
0
PageRank
0.34
References
18
Authors
8
Name           Order  Citations  PageRank
Lechun Zhang   1      0          0.34
Guangyao Chen  2      0          1.01
Yemin Shi      3      37         9.48
Quan Zhang     4      0          0.68
Rui Tang       5      188        19.22
Yaowei Wang    6      134        29.62
Yonghong Tian  7      1057       102.81
Tiejun Huang   8      1281       120.48