Title
Dynamic Sharing in Multi-accelerators of Neural Networks on an FPGA Edge Device
Abstract
Edge computing can potentially provide abundant processing resources for compute-intensive applications while bringing services close to end devices. With the increasing demand for computing acceleration at the edge, FPGAs have been deployed to provide custom deep neural network (DNN) accelerators. This paper explores a DNN accelerator sharing system on an edge FPGA device that serves various DNN applications from multiple end devices simultaneously. The proposed SharedDNN/PlanAhead policy exploits the regularity among requests for various DNN accelerators and determines which accelerator to allocate to each request, and in what order to respond to requests, so as to achieve maximum responsiveness for a queue of acceleration requests. Our results show an overall performance gain of up to 2.20x and improved utilization, reducing DNN library usage by up to 27% while staying within the requests' requirements and resource constraints.
Year: 2020
DOI: 10.1109/ASAP49362.2020.00040
Venue: 2020 IEEE 31st International Conference on Application-specific Systems, Architectures and Processors (ASAP)
Keywords: Acceleration Framework, Dynamic Sharing, Deep Neural Network, Edge Computing
DocType: Conference
ISSN: 2160-0511
ISBN: 978-1-7281-7279-8
Citations: 0
PageRank: 0.34
References: 14
Authors: 4
Name                  Order  Citations  PageRank
Hsin-Yu Ting          1      1          1.03
Tootiya Giyahchi      2      0          0.34
Ardalan Amiri Sani    3      149        21.84
Elaheh Bozorgzadeh    4      630        37.93