Title
Accelerating Deep Neural Networks with Spatial Bottleneck Modules.
Abstract
This paper presents an efficient module, named spatial bottleneck, for accelerating the convolutional layers in deep neural networks. The core idea is to decompose convolution into two stages: first reducing the spatial resolution of the feature map, and then restoring it to the desired size. This operation decreases the sampling density in the spatial domain, which is independent of, yet complementary to, network acceleration approaches in the channel domain. Using different sampling rates, we can trade off between recognition accuracy and model complexity. As a basic building block, spatial bottleneck can be used to replace either a single convolutional layer or the combination of two convolutional layers. We empirically verify the effectiveness of spatial bottleneck by applying it to deep residual networks. Spatial bottleneck achieves 2x and 1.4x speedups on the regular and channel-bottlenecked residual blocks, respectively, with accuracy retained in recognizing low-resolution images and even improved in recognizing high-resolution images.
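The abstract describes spatial bottleneck as a two-stage replacement for a convolutional layer: one stage lowers the spatial resolution of the feature map and the other restores it to the desired size. Below is a minimal PyTorch sketch of this idea, assuming the two stages are realized as a stride-s convolution followed by a stride-s transposed convolution; the class name, layer choices, and the default sampling rate s=2 are illustrative assumptions, not details taken from the paper.

```python
# Sketch of the spatial-bottleneck idea from the abstract (not the authors' code):
# reduce spatial resolution with a strided convolution, then restore it with a
# transposed convolution, so most of the 3x3 computation runs on a smaller map.
import torch
import torch.nn as nn


class SpatialBottleneck(nn.Module):
    """Two-stage replacement for a single 3x3 convolutional layer (illustrative)."""

    def __init__(self, in_channels, out_channels, s=2):
        super().__init__()
        # Stage 1: sample the feature map sparsely (resolution divided by s).
        self.reduce = nn.Conv2d(in_channels, out_channels,
                                kernel_size=3, stride=s, padding=1, bias=False)
        # Stage 2: restore the original spatial resolution.
        self.restore = nn.ConvTranspose2d(out_channels, out_channels,
                                          kernel_size=3, stride=s, padding=1,
                                          output_padding=s - 1, bias=False)
        self.bn = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.bn(self.restore(self.reduce(x))))


if __name__ == "__main__":
    x = torch.randn(1, 64, 56, 56)
    block = SpatialBottleneck(64, 64, s=2)
    print(block(x).shape)  # torch.Size([1, 64, 56, 56]) -- resolution restored
```

With sampling rate s, the first convolution visits roughly 1/s^2 of the spatial positions, which is the source of the reduced sampling density and the speedups reported in the abstract.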
Year
2018
Venue
arXiv: Computer Vision and Pattern Recognition
Field
Bottleneck, Residual, Pattern recognition, Convolution, Computer science, Communication channel, Sampling (statistics), Acceleration, Artificial intelligence, Image resolution, Speedup
DocType
Journal
Volume
abs/1809.02601
Citations
1
PageRank
0.35
References
21
Authors
5
Name             Order  Citations  PageRank
Junran Peng      1      8          2.52
Ling-Xi Xie      2      429        37.79
Zhaoxiang Zhang  3      1022       99.76
Tieniu Tan       4      11681      744.35
Jingdong Wang    5      4198       156.76