Title
Sparse Convolutional Neural Networks
Abstract
Deep neural networks have achieved remarkable performance in both image classification and object detection problems, at the cost of a large number of parameters and high computational complexity. In this work, we show how to reduce the redundancy in these parameters using a sparse decomposition. Maximum sparsity is obtained by exploiting both inter-channel and intra-channel redundancy, with a fine-tuning step that minimizes the recognition loss caused by maximizing sparsity. This procedure zeros out more than 90% of the parameters, with an accuracy drop of less than 1% on the ILSVRC2012 dataset. We also propose an efficient sparse matrix multiplication algorithm on CPU for Sparse Convolutional Neural Network (SCNN) models. Our CPU implementation demonstrates much higher efficiency than off-the-shelf sparse matrix libraries, with a significant speedup over the original dense network. In addition, we apply the SCNN model to the object detection problem, in conjunction with a cascade model and sparse fully connected layers, to achieve significant speedups.
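The abstract's central computational kernel is a sparse-by-dense matrix product on CPU. As a minimal sketch of how such a kernel typically works, and not the authors' algorithm, the C routine below multiplies a weight matrix stored in compressed sparse row (CSR) form by a dense input matrix; the function and parameter names (csr_matmul, row_ptr, col_idx, val) are hypothetical.

    #include <stddef.h>

    /* Hypothetical CSR sparse-dense multiply: Y = W * X, where W is an
       m-by-k sparse matrix in CSR form and X is a dense k-by-n matrix
       stored row-major. Y (m-by-n, row-major) must be zero-initialized
       by the caller. Illustrative sketch only, not the paper's kernel. */
    void csr_matmul(size_t m, size_t n,
                    const size_t *row_ptr, /* length m+1: row i spans [row_ptr[i], row_ptr[i+1]) */
                    const size_t *col_idx, /* length nnz: column index of each nonzero */
                    const float *val,      /* length nnz: value of each nonzero */
                    const float *X, float *Y)
    {
        for (size_t i = 0; i < m; ++i) {
            for (size_t p = row_ptr[i]; p < row_ptr[i + 1]; ++p) {
                const float w = val[p];
                const float *x_row = X + col_idx[p] * n; /* row col_idx[p] of X */
                float *y_row = Y + i * n;                /* row i of Y */
                /* Each nonzero of W adds a scaled row of X into a row of Y,
                   so the cost is O(nnz * n) instead of O(m * k * n). */
                for (size_t j = 0; j < n; ++j)
                    y_row[j] += w * x_row[j];
            }
        }
    }

Because the work scales with the number of nonzeros rather than the full m-by-k size of W, a layer with more than 90% of its parameters zeroed out, as the abstract describes, would in principle perform under a tenth of the multiply-accumulates of the dense product.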
Year
2015
DOI
10.1109/CVPR.2015.7298681
Venue
2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Keywords
sparse decomposition, sparse matrix multiplication algorithm, sparse convolutional neural networks, SCNN model, object detection problem, cascade model, sparse fully connected layers
Field
Object detection, Pattern recognition, Computer science, Convolutional neural network, Sparse approximation, Redundancy (engineering), Artificial intelligence, Contextual image classification, Sparse matrix, Computational complexity theory, Speedup
DocType
Conference
Volume
2015
Issue
1
ISSN
1063-6919
Citations
102
PageRank
2.78
References
21
Authors
5
Name                 Order   Citations   PageRank
Baoyuan Liu          1       132         5.64
Min Wang             2       169         36.41
Hassan Foroosh       3       748         59.98
Marshall F. Tappen   4       1901        89.34
Marianna Pensky      5       161         7.94