Title: Optimizing Grouped Convolutions on Edge Devices
Abstract: When deploying a deep neural network on constrained hardware, the network's standard convolutions can be replaced with grouped convolutions, yielding substantial memory savings with minimal loss of accuracy. However, current implementations of grouped convolutions in modern deep learning frameworks are far from optimal in terms of speed. In this paper we propose Grouped Spatial Pack Convolutions (GSPC), a new implementation of grouped convolutions that outperforms existing solutions. We implement GSPC in TVM, which provides state-of-the-art performance on edge devices. We analyze a set of networks utilizing different types of grouped convolutions and evaluate their inference time on several edge devices. We observe that our new implementation scales well with the number of groups and provides the best inference times in all settings, improving on the existing implementations of grouped convolutions in TVM, PyTorch and TensorFlow Lite by $3.4\times$, $8\times$ and $4\times$ on average respectively. Code is available at https://github.com/gecLAB/tvm-GSPC/
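The memory savings the abstract attributes to grouped convolutions come from partitioning the channels into independent groups, which divides the weight count by the number of groups. A minimal sketch of that arithmetic (not the paper's GSPC implementation; the layer shapes below are illustrative assumptions):

```python
def conv2d_params(c_in, c_out, k, groups=1):
    """Weight-parameter count of a 2D convolution (bias ignored).

    Each of the `groups` groups maps c_in/groups input channels to
    c_out/groups output channels with its own k x k kernels, so the
    total is (c_in/g) * (c_out/g) * k * k * g = c_in*c_out*k*k / g.
    """
    assert c_in % groups == 0 and c_out % groups == 0
    return (c_in // groups) * (c_out // groups) * k * k * groups

# Illustrative 3x3 layer with 256 input and output channels.
standard = conv2d_params(256, 256, 3)            # ordinary convolution
grouped = conv2d_params(256, 256, 3, groups=8)   # 8-way grouped convolution

print(standard, grouped, standard // grouped)    # savings factor equals the group count
```

The same layer stores 8x fewer weights with 8 groups, which is why grouping trades a small accuracy loss for a large memory reduction on edge devices.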
Year: 2020
DOI: 10.1109/ASAP49362.2020.00039
Venue: 2020 IEEE 31st International Conference on Application-specific Systems, Architectures and Processors (ASAP)
DocType: Conference
ISSN: 2160-0511
ISBN: 978-1-7281-7279-8
Citations: 1
PageRank: 0.34
References: 8
Authors: 6

Name               Order  Citations  PageRank
Perry Gibson       1      2          0.70
José Cano          2      12         3.32
Jack Turner        3      1          1.70
Elliot J. Crowley  4      1          1.36
Michael O'Boyle    5      405        19.81
Amos J. Storkey    6      1          1.36