Title
Exact Neural Networks from Inexact Multipliers via Fibonacci Weight Encoding
Abstract
Edge devices must support computationally demanding algorithms, such as neural networks, within tight area and energy budgets. While approximate computing may alleviate these constraints, limiting the induced errors remains an open challenge. In this paper, we propose a hardware/software co-design solution based on an inexact multiplier that reduces area and power-delay product by 73% and 43%, respectively, while still computing exact results whenever one input is a Fibonacci-encoded value. We introduce a retraining strategy that quantizes neural network weights to Fibonacci-encoded values, ensuring exact computation during inference. We benchmark our strategy on SqueezeNet 1.0, DenseNet-121, and ResNet-18, measuring accuracy degradations of only 0.4%, 1.1%, and 1.7%, respectively.
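The abstract only outlines the weight-quantization step; the minimal Python sketch below illustrates one way such a projection could look, assuming that "Fibonacci encoded" means the binary representation of a weight's fixed-point magnitude contains no two adjacent 1 bits (a Zeckendorf-style condition). The function names, 8-bit width, symmetric signed range, and nearest-value rounding are illustrative assumptions, not details taken from the paper.

"""
Illustrative sketch (not the authors' code) of projecting weights onto
Fibonacci-encoded fixed-point values, under the assumptions stated above.
"""

def is_fibonacci_encoded(value: int) -> bool:
    # True if the binary form of |value| has no two adjacent 1 bits.
    magnitude = abs(value)
    return (magnitude & (magnitude >> 1)) == 0


def nearest_fibonacci_encoded(value: int, bits: int = 8) -> int:
    # Round a signed fixed-point integer to the closest value whose
    # magnitude satisfies the no-adjacent-ones constraint.
    limit = (1 << (bits - 1)) - 1          # symmetric signed range (assumed)
    clipped = max(-limit, min(limit, value))
    # Search outward from the clipped value; valid codes are dense enough
    # that this short scan always terminates.
    for offset in range(limit + 1):
        for candidate in (clipped - offset, clipped + offset):
            if -limit <= candidate <= limit and is_fibonacci_encoded(candidate):
                return candidate
    return 0


def quantize_weights(weights, scale: float, bits: int = 8):
    # Map float weights to Fibonacci-encoded fixed-point codes and back,
    # mimicking a projection step that could be inserted into retraining.
    quantized = []
    for w in weights:
        code = nearest_fibonacci_encoded(round(w / scale), bits)
        quantized.append(code * scale)
    return quantized


if __name__ == "__main__":
    weights = [0.31, -0.07, 0.55, -0.42]
    print(quantize_weights(weights, scale=1 / 64))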
Year
2021
DOI
10.1109/DAC18074.2021.9586245
Venue
2021 58th ACM/IEEE Design Automation Conference (DAC)
Keywords
neural networks, quantization, accelerators, approximate computing
DocType
Conference
ISSN
0738-100X
Citations
0
PageRank
0.34
References
0
Authors
6
Name | Order | Citations | PageRank
William E. Simon | 1 | 22 | 4.67
Valérian Ray | 2 | 0 | 0.34
A. Levisse | 3 | 25 | 8.74
Giovanni Ansaloni | 4 | 98 | 15.78
Marina Zapater | 5 | 54 | 10.70
D. Atienza | 6 | 182 | 24.26