Title
On Compressing Deep Models by Low Rank and Sparse Decomposition
Abstract
Deep compression refers to removing the redundancy of parameters and feature maps in deep learning models. Low-rank approximation and pruning for sparse structures play a vital role in many compression works. However, weight filters tend to be both low-rank and sparse; previous methods that neglect either part of this structural information suffer from iterative retraining, compromised accuracy, and low compression rates. Here we propose a unified framework that integrates the low-rank and sparse decomposition of weight matrices with feature map reconstruction. Our model includes methods such as connection pruning as special cases, and is optimized by a fast SVD-free algorithm. We prove theoretically that, owing to its generalizability, our model can reconstruct the feature maps well on both training and test data from only a small sample, so accuracy is largely preserved even before the subsequent retraining. With such a warm start for retraining, our compression method offers several merits: (a) higher compression rates, (b) little loss of accuracy, and (c) fewer rounds needed to compress deep models. Experimental results on several popular models, such as AlexNet, VGG-16, and GoogLeNet, show that our model significantly reduces the number of parameters in both convolutional and fully-connected layers. As a result, our model reduces the size of VGG-16 by 15×, outperforming other recent compression methods that use a single strategy.
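To illustrate the decomposition at the heart of the abstract, below is a minimal NumPy sketch that splits a weight matrix W into a low-rank part L plus a sparse part S. It uses a plain alternating scheme (truncated SVD plus hard thresholding) purely for illustration; the paper's actual optimizer is SVD-free and additionally couples the decomposition to feature map reconstruction, which this toy omits. The function name and parameters are hypothetical.

```python
import numpy as np

def low_rank_sparse_decompose(W, rank, sparsity, n_iters=50):
    """Approximate W ~= L + S, with rank(L) <= rank and S keeping only
    the `sparsity` fraction of largest-magnitude residual entries.

    Illustrative alternating scheme only; the paper's solver is SVD-free
    and also minimizes feature-map reconstruction error.
    """
    S = np.zeros_like(W)
    for _ in range(n_iters):
        # Low-rank step: best rank-r approximation of the residual W - S.
        U, s, Vt = np.linalg.svd(W - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # Sparse step: hard-threshold W - L, keeping the k largest entries.
        R = W - L
        k = int(sparsity * R.size)
        thresh = np.partition(np.abs(R).ravel(), -k)[-k] if k > 0 else np.inf
        S = np.where(np.abs(R) >= thresh, R, 0.0)
    return L, S

# Toy usage on a random stand-in for a layer's weight matrix.
W = np.random.randn(256, 512)
L, S = low_rank_sparse_decompose(W, rank=32, sparsity=0.05)
err = np.linalg.norm(W - L - S) / np.linalg.norm(W)
print(f"relative reconstruction error: {err:.3f}")
```

The storage saving comes from keeping L as its two rank-r factors (m*r + r*n values) and S in a sparse format (k nonzeros), rather than the full m*n dense matrix.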
Year
2017
DOI
10.1109/CVPR.2017.15
Venue
2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Keywords
sparse decomposition, deep compression, feature maps, deep learning models, weight filters, low compression rates, weight matrices, feature map reconstructions, fast SVD-free algorithm, compression method, higher compression rates, deep models, low rank decomposition, SVD-free algorithm, pruning connections
Field
Iterative reconstruction, Data modeling, Pattern recognition, Computer science, Matrix decomposition, Sparse approximation, Redundancy (engineering), Test data, Artificial intelligence, Deep learning, Sparse matrix
DocType
Conference
Volume
2017
Issue
1
ISSN
1063-6919
ISBN
978-1-5386-0458-8
Citations
30
PageRank
0.90
References
17
Authors
4
Name           Order  Citations  PageRank
Xiyu Yu        1      32         2.28
Tongliang Liu  2      902        47.13
Xinchao Wang   3      474        43.70
Dacheng Tao    4      19032      747.78