Title
The Effects of Approximate Multiplication on Convolutional Neural Networks
Abstract
This article analyzes the effects of approximate multiplication when performing inferences on deep convolutional neural networks (CNNs). Approximate multiplication reduces the cost of the underlying circuits so that CNN inferences can be performed more efficiently in hardware accelerators. The study identifies the critical factors in the convolution, fully-connected, and batch normalization layers that allow more accurate CNN predictions despite the errors from approximate multiplication. The same factors also provide an arithmetic explanation of why bfloat16 multiplication performs well on CNNs. Experiments with well-known network architectures show that the approximate multipliers can produce predictions nearly as accurate as the FP32 references, without additional training. For example, the ResNet and Inception-v4 models with Mitch-w6 multiplication produce Top-5 errors within 0.2 percent of the FP32 references. A brief cost comparison of Mitch-w6 against bfloat16 is presented, in which a MAC operation saves up to 80 percent of the energy of bfloat16 arithmetic. The most far-reaching contribution of this article is the analytical justification that multiplications can be approximated while additions need to be exact in CNN MAC operations.
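To make the abstract's central claim concrete, below is a minimal sketch of an approximate MAC: products are approximated while the accumulation stays exact. It uses plain Mitchell logarithmic multiplication as a stand-in for the paper's Mitch-w hardware design (which additionally truncates operands to w bits); the function names and the sample values are illustrative assumptions, not the authors' implementation.

```python
import math


def mitchell_multiply(a: float, b: float) -> float:
    """Approximate a * b with Mitchell's logarithmic multiplication (illustrative)."""
    if a == 0.0 or b == 0.0:
        return 0.0
    sign = -1.0 if (a < 0) != (b < 0) else 1.0
    a, b = abs(a), abs(b)

    # Split each operand into characteristic (integer part of log2) and mantissa fraction.
    ka, kb = math.floor(math.log2(a)), math.floor(math.log2(b))
    fa, fb = a / 2.0 ** ka - 1.0, b / 2.0 ** kb - 1.0  # both in [0, 1)

    # Mitchell: log2(a*b) ~ ka + kb + fa + fb, then a piecewise-linear antilog.
    s = fa + fb
    if s < 1.0:
        return sign * (1.0 + s) * 2.0 ** (ka + kb)
    return sign * s * 2.0 ** (ka + kb + 1)


def approx_dot(xs, ws):
    """One output of a convolution or fully-connected layer:
    approximate multiplications, exact accumulation (the paper's key point)."""
    acc = 0.0
    for x, w in zip(xs, ws):
        acc += mitchell_multiply(x, w)  # exact addition of approximate products
    return acc


if __name__ == "__main__":
    xs = [0.53, -1.20, 0.75, 2.00]   # hypothetical activations
    ws = [0.11, 0.42, -0.90, 0.30]   # hypothetical weights
    print("exact :", sum(x * w for x, w in zip(xs, ws)))
    print("approx:", approx_dot(xs, ws))
```

The accumulation of many approximate products illustrates why the individual multiplication errors tend to matter less than errors in the additions, which is the intuition the paper develops analytically.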
Year
2022
DOI
10.1109/TETC.2021.3050989
Venue
IEEE Transactions on Emerging Topics in Computing
Keywords
Machine learning, computer vision, object recognition, arithmetic and logic units, low-power design
DocType
Journal
Volume
10
Issue
2
ISSN
2168-6750
Citations
0
PageRank
0.34
References
13
Authors
4
Name | Order | Citations | PageRank
MIN SOO KIM | 1 | 82 | 16.71
Alberto A. Del Barrio | 2 | 78 | 14.49
HyunJin Kim | 3 | 0 | 0.34
Nader Bagherzadeh | 4 | 1674 | 182.54