Title
Efficient Winograd or Cook-Toom Convolution Kernel Implementation on Widely Used Mobile CPUs
Abstract
The Winograd or Cook-Toom class of algorithms helps to reduce the overall compute complexity of many modern deep convolutional neural networks (CNNs). Although a great deal of research has been devoted to CNN models and algorithmic optimizations, little attention has been paid to the efficient implementation of these algorithms on embedded CPUs, which usually have very limited memory and a low power budget. This paper aims to fill this gap and focuses on the efficient implementation of Winograd- or Cook-Toom-based convolution on modern Arm Cortex-A CPUs, which are widely used in mobile devices today. Specifically, we demonstrate a reduction in inference latency by using a set of optimization strategies that improve the utilization of computational resources and by effectively leveraging the ARMv8-A NEON SIMD instruction set. We evaluated our proposed region-wise multi-channel implementations on an Arm Cortex-A73 platform using several representative CNNs. The results show significant full-network performance improvements of up to 60% over existing im2row/im2col-based optimization techniques.
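For readers unfamiliar with this algorithm family, the sketch below (an illustrative assumption, not code from the paper) shows the 1-D Winograd F(2,3) transform: it produces two outputs of a 3-tap convolution with four multiplications instead of the six required by the direct method. Production kernels apply the 2-D analogue F(2x2, 3x3) tile-wise and vectorise the element-wise products, e.g. with NEON intrinsics.

    #include <stdio.h>

    /* Illustrative 1-D Winograd F(2,3): two outputs of a 3-tap convolution
     * using 4 multiplications (direct computation needs 6). Not taken from
     * the paper; its kernels use the 2-D, multi-channel form. */
    static void winograd_f2_3(const float d[4], const float g[3], float y[2]) {
        /* Filter transform (precomputable once per layer) */
        const float u0 = g[0];
        const float u1 = 0.5f * (g[0] + g[1] + g[2]);
        const float u2 = 0.5f * (g[0] - g[1] + g[2]);
        const float u3 = g[2];
        /* Input transform */
        const float v0 = d[0] - d[2];
        const float v1 = d[1] + d[2];
        const float v2 = d[2] - d[1];
        const float v3 = d[1] - d[3];
        /* Element-wise products: the only four multiplications */
        const float m0 = u0 * v0, m1 = u1 * v1, m2 = u2 * v2, m3 = u3 * v3;
        /* Output transform */
        y[0] = m0 + m1 + m2;
        y[1] = m1 - m2 - m3;
    }

    int main(void) {
        const float d[4] = {1, 2, 3, 4};
        const float g[3] = {1, 0, -1};
        float y[2];
        winograd_f2_3(d, g, y);
        /* Direct check: y0 = 1*1 + 2*0 + 3*(-1) = -2, y1 = 2*1 + 3*0 + 4*(-1) = -2 */
        printf("y = %.1f %.1f\n", y[0], y[1]);
        return 0;
    }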
Year
2019
DOI
10.1109/EMC249363.2019.00008
Venue
2019 2nd Workshop on Energy Efficient Machine Learning and Cognitive Computing for Embedded Applications (EMC2)
Keywords
CNN, Winograd, Cook-Toom, Embedded CPU
DocType
Journal
Volume
abs/1903.01521
ISBN
978-1-7281-6764-0
Citations
1
PageRank
0.35
References
4
Authors
6
Name              Order  Citations  PageRank
Partha Maji       1      19         2.38
Andrew Mundy      2      1          0.35
Ganesh S. Dasika  3      387        24.30
Jesse G. Beu      4      2          3.41
Matthew Mattina   5      441        28.63
Robert Mullins    6      84         11.28