Title
ADAM-ADMM: A Unified, Systematic Framework of Structured Weight Pruning for DNNs.
Abstract
Weight pruning methods for deep neural networks (DNNs) have been demonstrated to achieve good pruning ratios without loss of accuracy, thereby alleviating the significant computation and storage requirements of large-scale DNNs. Structured weight pruning methods have been proposed to overcome the limitation of irregular network structure and have demonstrated actual GPU acceleration. However, both the pruning ratio and the GPU acceleration are limited when accuracy needs to be maintained. In this work, we overcome these limitations by proposing a unified, systematic framework of structured weight pruning for DNNs, named ADAM-ADMM. The framework can induce different types of structured sparsity, such as filter-wise, channel-wise, and shape-wise sparsity, as well as non-structured sparsity. It incorporates stochastic gradient descent with ADMM, and can be understood as a dynamic regularization method in which the regularization target is analytically updated in each iteration. A significant improvement in the structured weight pruning ratio is achieved without loss of accuracy, along with a fast convergence rate. With a small sparsity degree of 33.3% on the convolutional layers, we achieve a 1.64% accuracy enhancement for the AlexNet model, obtained through the mitigation of overfitting. Without loss of accuracy on the AlexNet model, we achieve 2.58x and 3.65x average measured speedup on two GPUs, clearly outperforming prior work. The average speedups reach 2.77x and 7.5x when allowing a moderate accuracy loss of 2%; in this case the model compression for convolutional layers is 13.2x, corresponding to a 10.5x CPU speedup. Our experiments on the ResNet model and on other datasets such as UCF101 and CIFAR-10 demonstrate the consistently higher performance of our framework. Our models and codes are released at this https URL
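The SGD-plus-ADMM interplay described in the abstract follows the standard ADMM splitting for constrained pruning: an SGD update of the weights under a quadratic penalty, an analytical projection of the auxiliary variable onto the sparsity constraint, and a dual update. Below is a minimal PyTorch sketch of that loop, not the authors' released code; the model, `train_loader`, `criterion`, the list `conv_names` of convolutional weight names, and the hyperparameters `keep_ratio`, `rho`, `admm_iters`, `sgd_epochs`, and `lr` are illustrative assumptions, and only filter-wise sparsity is shown.

import torch

def project_filter_sparse(weight, keep_ratio):
    """Euclidean projection onto the filter-wise sparsity constraint:
    keep the filters (output channels) with the largest L2 norms, zero the rest."""
    num_filters = weight.shape[0]
    num_keep = max(1, int(round(keep_ratio * num_filters)))
    norms = weight.reshape(num_filters, -1).norm(dim=1)
    keep_idx = torch.topk(norms, num_keep).indices
    mask = torch.zeros(num_filters, device=weight.device)
    mask[keep_idx] = 1.0
    return weight * mask.view(-1, *([1] * (weight.dim() - 1)))

def admm_prune(model, conv_names, train_loader, criterion,
               keep_ratio=0.5, rho=1e-3, admm_iters=10, sgd_epochs=1, lr=1e-3):
    params = dict(model.named_parameters())
    # Z: auxiliary variables constrained to be structured-sparse; U: scaled duals.
    Z = {n: project_filter_sparse(params[n].detach().clone(), keep_ratio) for n in conv_names}
    U = {n: torch.zeros_like(params[n]) for n in conv_names}
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)

    for _ in range(admm_iters):
        # W-update: SGD on task loss + quadratic penalty pulling W toward Z - U.
        # The penalty target (Z - U) is the dynamically updated regularization
        # target mentioned in the abstract.
        for _ in range(sgd_epochs):
            for x, y in train_loader:
                opt.zero_grad()
                loss = criterion(model(x), y)
                for n in conv_names:
                    loss = loss + (rho / 2) * (params[n] - Z[n] + U[n]).pow(2).sum()
                loss.backward()
                opt.step()
        # Z-update: analytical projection onto the structured-sparsity set.
        # U-update: dual ascent on the constraint W = Z.
        with torch.no_grad():
            for n in conv_names:
                Z[n] = project_filter_sparse(params[n] + U[n], keep_ratio)
                U[n] = U[n] + params[n] - Z[n]
    # After convergence, W would be hard-pruned to the support of Z and the
    # masked network retrained to recover accuracy.
    return Z

Channel-wise, shape-wise, or non-structured sparsity would only change the projection step; the outer ADMM loop stays the same, which is what makes the framework unified across sparsity types.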
Year: 2018
Venue: arXiv: Neural and Evolutionary Computing
Field: Stochastic gradient descent, Computer science, Algorithm, Regularization (mathematics), Artificial intelligence, Acceleration, Rate of convergence, Overfitting, Machine learning, Pruning, Speedup, Computation
DocType:
Volume: abs/1807.11091
Citations: 6
Journal:
PageRank: 0.46
References: 0
Authors: 9
Name            Order  Citations  PageRank
Tianyun Zhang   1      31         6.42
Kaiqi Zhang     2      19         7.11
Shaokai Ye      3      38         6.53
Jiayu Li        4      31         3.09
Jian Tang       5      1095       74.34
Wujie Wen       6      300        30.61
Xue Lin         7      15         4.67
Makan Fardad    8      547        41.98
Yanzhi Wang     9      1082       136.11