Title: Tunable Floating-Point for Energy Efficient Accelerators

Abstract:
In this work, we address the design of an on-chip accelerator with Tunable Floating-Point (TFP) precision for Machine Learning and other computation-demanding applications. The precision can be chosen per operation by selecting a specific number of bits for the significand and the exponent in the floating-point representation. By tuning the precision of a given algorithm down to the minimum that still achieves an acceptable target error, we can make the computation more power-efficient. We focus on floating-point multiplication, the most power-demanding arithmetic operation.
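The TFP idea described in the abstract, rounding a result to a chosen number of significand bits, can be simulated in software. The sketch below is a minimal illustration, not the paper's hardware design: the function names `round_to_sig_bits` and `tfp_mul` are assumptions, and exponent-width tuning and the energy aspects are omitted.

```python
import math

def round_to_sig_bits(x, m):
    """Round x to m significand bits (1 integer bit plus m-1
    fractional bits), simulating a reduced-precision result."""
    if x == 0.0 or not math.isfinite(x):
        return x
    # Exponent of the leading bit of |x|.
    e = math.floor(math.log2(abs(x)))
    # Scale so that the m-th significand bit lands at the units place,
    # round to the nearest integer, then scale back.
    scale = 2.0 ** (m - 1 - e)
    return round(x * scale) / scale

def tfp_mul(a, b, sig_bits=8):
    """Multiply in full precision, then round the product to
    sig_bits of significand, mimicking a tunable-precision unit."""
    return round_to_sig_bits(a * b, sig_bits)
```

For example, `round_to_sig_bits(math.pi, 8)` keeps only the first eight significand bits of pi, giving 3.140625; lowering `sig_bits` trades accuracy for (in hardware) a narrower, cheaper multiplier.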
Year: 2018
DOI: 10.1109/ARITH.2018.8464797
Venue: 2018 IEEE 25th Symposium on Computer Arithmetic (ARITH)
Keywords: on-chip accelerator, Machine Learning, computation-demanding applications, Tunable Floating-Point precision, floating-point representation, floating-point multiplication, energy efficient accelerators, TFP precision
Field: System on a chip, Adder, Floating point, Efficient energy use, Computer science, Parallel computing, Electronic engineering, Multiplication, Decoding methods, Significand, Computation
DocType: Conference
ISSN: 1063-6889
ISBN: 978-1-5386-2665-8
Citations: 1
PageRank: 0.39
References: 2
Authors: 1
Author:
Name: Alberto Nannarelli
Order: 1
Citations: 190
PageRank: 20.41