Title |
---|
Training and Inference using Approximate Floating-Point Arithmetic for Energy Efficient Spiking Neural Network Processors |
Abstract |
---|
This paper presents a systematic analysis of spiking neural network (SNN) performance at reduced computation precision using approximate adders. We propose an IEEE 754-based approximate floating-point adder and apply it to leaky integrate-and-fire (LIF) neuron-based SNN operation for both training and inference. Experimental results on a two-layer SNN for the MNIST handwritten digit recognition task show that a 4-bit exact mantissa adder combined with a 19-bit lower-part OR adder (LOA), instead of the 23-bit full-precision mantissa adder, maintains good classification accuracy. When the LOA is adopted as the mantissa adder, it achieves up to 74.1% power savings and 96.5% energy savings.
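The abstract's headline configuration (a 4-bit exact upper adder plus a 19-bit OR-approximated lower part on the 23-bit mantissa) is concrete enough to sketch. Below is a minimal Python model of that LOA scheme, assuming the classic LOA variant in which the lower bits are ORed and the AND of the lower parts' most significant bits feeds the carry-in of the exact upper adder; the paper's hardware may differ in such details, and `loa_add` is an illustrative name, not the authors' code.

```python
def loa_add(a: int, b: int, width: int = 23, lower: int = 19) -> int:
    """Lower-part OR adder (LOA) sketch: approximate the low `lower` bits
    of an unsigned `width`-bit addition with a bitwise OR, add the upper
    (width - lower) bits exactly, and inject the AND of the lower parts'
    most significant bits as carry-in to the exact upper adder
    (an assumption based on the original LOA proposal)."""
    mask = (1 << lower) - 1
    a_lo, b_lo = a & mask, b & mask
    sum_lo = a_lo | b_lo                              # approximate lower part
    carry = (a_lo >> (lower - 1)) & (b_lo >> (lower - 1)) & 1
    sum_hi = (a >> lower) + (b >> lower) + carry      # exact 4-bit upper part
    return (sum_hi << lower) | sum_lo                 # (width + 1)-bit result

# The error magnitude stays below 2**lower, so with lower = 19 the
# approximation only disturbs the tail of the 23-bit mantissa sum:
a, b = 0x5AF0AF, 0x270F51          # two arbitrary 23-bit mantissa patterns
exact, approx = a + b, loa_add(a, b)
print(f"exact=0x{exact:06X} approx=0x{approx:06X} error={exact - approx}")
```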
Year | DOI | Venue |
---|---|---|
2021 | 10.1109/ICEIC51217.2021.9369724 | 2021 International Conference on Electronics, Information, and Communication (ICEIC) |
Keywords | DocType | ISBN
---|---|---|
spiking neural network (SNN), leaky integrate-and-fire (LIF) neuron, approximate adder, floating-point arithmetic | Conference | 978-1-7281-9162-1
Citations | PageRank | References
---|---|---|
0 | 0.34 | 0
Authors |
---|
5
Name | Order | Citations | PageRank |
---|---|---|---|
Myeongjin Kwak | 1 | 0 | 0.34 |
Jungwon Lee | 2 | 890 | 95.15 |
Hyoju Seo | 3 | 0 | 0.34 |
Mingyu Sung | 4 | 0 | 0.34 |
Yong Tae Kim | 5 | 22 | 8.62 |