Title
Data Reuse for Accelerated Approximate Warps
Abstract
Many data-driven applications, including computer vision, machine learning, speech recognition, and medical diagnostics, tolerate computation error. These applications are often accelerated on GPUs, but the performance gains come at the cost of high energy usage. In this article, we present DRAAW, an approximate computing technique capable of accelerating GPGPU applications at the warp level. In GPUs, warps are groups of threads issued together across multiple cores. The slowest thread dictates the pace of the warp, so DRAAW identifies these bottlenecks and avoids them during approximation. We reduce computation cost with an approximate lookup table that tracks recent operations and reuses their results to exploit temporal locality within applications. To improve neural network performance, we propose neuron-aware approximation, a technique that profiles operations within network layers and automatically configures DRAAW so that computations with greater impact on output accuracy are subject to less approximation. We evaluate our design by placing DRAAW within each core of an Nvidia Kepler architecture Titan. DRAAW improves throughput by up to 2.8× and improves energy-delay product (EDP) by 5.6× for six GPGPU applications while maintaining less than 5% output error. We show that neuron-aware approximation accelerates the inference of six neural networks by 2.9× and improves EDP by 6.2× with less than 1% impact on prediction accuracy.
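The approximate value-reuse mechanism summarized above can be illustrated with a small software model. The C++ sketch below is a minimal, hypothetical illustration of a DRAAW-style approximate lookup table: operand bit patterns are truncated so that nearby inputs map to the same entry, and a hit returns a recently computed product instead of recomputing it. The table size, the 12-bit truncation, and all identifiers are illustrative assumptions, not the paper's actual hardware parameters.

```cpp
// Hypothetical software model of approximate value reuse: a small
// direct-mapped table keyed by the truncated bit patterns of the operands.
// On a hit, the stored product is returned instead of recomputing.
#include <array>
#include <cstdint>
#include <cstring>
#include <iostream>

struct ApproxLUT {
    static constexpr int kEntries  = 256;  // assumed table size
    static constexpr int kDropBits = 12;   // assumed mantissa bits ignored
    struct Entry { uint64_t key; float value; bool valid = false; };
    std::array<Entry, kEntries> table{};

    static uint32_t truncate(float x) {
        uint32_t bits;
        std::memcpy(&bits, &x, sizeof(bits));
        return bits >> kDropBits;          // drop low mantissa bits
    }

    float multiply(float a, float b, bool& hit) {
        uint64_t key = (uint64_t(truncate(a)) << 32) | truncate(b);
        Entry& e = table[key % kEntries];
        if (e.valid && e.key == key) { hit = true; return e.value; }
        hit = false;
        e = {key, a * b, true};            // exact compute on miss, then cache
        return e.value;
    }
};

int main() {
    ApproxLUT lut;
    bool hit;
    float r1 = lut.multiply(1.000f, 2.000f, hit);    // miss: exact multiply
    float r2 = lut.multiply(1.0001f, 2.0002f, hit);  // near-identical operands hit
    std::cout << r1 << ' ' << r2 << " hit=" << hit << '\n';
}
```

In the same spirit, a minimal sketch of the neuron-aware idea, assuming per-layer sensitivity scores obtained from profiling: layers whose operations affect output accuracy more receive less approximation. The score values and the 4-to-16-bit truncation range are assumptions for illustration, not the paper's configuration.

```cpp
// Hypothetical neuron-aware configuration: layers profiled as more sensitive
// to error are assigned fewer truncated bits (less approximation).
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    // Assumed profiling result: relative output-error impact per layer.
    std::vector<double> sensitivity = {0.9, 0.4, 0.2, 0.05};
    double maxS = *std::max_element(sensitivity.begin(), sensitivity.end());

    for (size_t i = 0; i < sensitivity.size(); ++i) {
        // Scale truncation inversely with sensitivity: 4 bits for the most
        // sensitive layer, up to 16 for the least sensitive.
        int dropBits = 4 + int((1.0 - sensitivity[i] / maxS) * 12);
        std::printf("layer %zu: truncate %d mantissa bits\n", i, dropBits);
    }
}
```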
Year
2020
DOI
10.1109/TCAD.2020.2986128
Venue
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
Keywords
Approximate computing, energy efficiency, floating-point unit (FPU), GPU, warps
DocType
Journal
Volume
39
Issue
12
ISSN
0278-0070
Citations
1
PageRank
0.35
References
0
Authors
5
Name                Order   Citations   PageRank
Daniel Peroni       1       15          3.29
Mohsen Imani        2       341         48.13
Hamid Nejatollahi   3       22          5.02
Nikil Dutt          4       4960        421.49
Tajana Simunic      5       3198        266.23