Title
Elastic-DF: Scaling Performance of DNN Inference in FPGA Clouds through Automatic Partitioning
Abstract
Customized compute acceleration in the datacenter is key to the wider roll-out of applications based on deep neural network (DNN) inference. In this article, we investigate how to maximize the performance and scalability of field-programmable gate array (FPGA)-based pipeline dataflow DNN inference accelerators (DFAs) automatically on computing infrastructures consisting of multi-die, network-connected FPGAs. We present Elastic-DF, a novel resource partitioning tool and associated FPGA runtime infrastructure that integrates with the DNN compiler FINN. Elastic-DF allocates FPGA resources to DNN layers and layers to individual FPGA dies to maximize the total performance of the multi-FPGA system. In the resulting Elastic-DF mapping, the accelerator may be instantiated multiple times, and each instance may be segmented across multiple FPGAs transparently, whereby the segments communicate peer-to-peer through 100 Gbps Ethernet FPGA infrastructure, without host involvement. When applied to ResNet-50, Elastic-DF provides a 44% latency decrease on Alveo U280. For MobileNetV1 on Alveo U200 and U280, Elastic-DF enables a 78% throughput increase, eliminating the performance difference between these cards and the larger Alveo U250. Elastic-DF also increases operating frequency in all our experiments, on average by over 20%. Elastic-DF therefore increases performance portability between different sizes of FPGA and improves the critical throughput-per-cost metric of datacenter inference.
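The abstract describes allocating FPGA resources to DNN layers and assigning layers to individual dies so that overall pipeline performance is maximized. As a rough illustration of that idea only (not the paper's actual Elastic-DF algorithm), the Python sketch below greedily packs consecutive pipeline layers onto dies under a per-die resource budget; the layer names, LUT figures, and the cost model are all hypothetical.

```python
# Toy illustration of layer-to-die partitioning for a pipelined dataflow
# accelerator. This is NOT the Elastic-DF algorithm from the paper; it is a
# hypothetical greedy sketch: keep layers in pipeline order and open a new
# FPGA die segment whenever the running resource demand exceeds the budget.

from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    luts: int  # hypothetical LUT demand of this layer's dataflow stage

def partition_layers(layers, luts_per_die):
    """Greedily assign consecutive layers to dies under a LUT budget."""
    dies, current, used = [], [], 0
    for layer in layers:
        if current and used + layer.luts > luts_per_die:
            dies.append(current)      # close the current die segment
            current, used = [], 0
        current.append(layer)
        used += layer.luts
    if current:
        dies.append(current)
    return dies

if __name__ == "__main__":
    # Hypothetical per-layer resource demands for a small network.
    net = [Layer("conv1", 40_000), Layer("conv2", 70_000),
           Layer("conv3", 65_000), Layer("fc", 20_000)]
    for i, die in enumerate(partition_layers(net, luts_per_die=100_000)):
        print(f"die {i}: {[layer.name for layer in die]}")
```

A real partitioner would additionally rebalance per-layer folding so that all die segments sustain the same pipeline rate, which is what makes the segmented accelerator's throughput scale; the sketch above only captures the bin-packing aspect.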
Year
2022
DOI
10.1145/3470567
Venue
ACM Transactions on Reconfigurable Technology and Systems
Keywords
Deep neural networks, partitioning, distributed inference
DocType
Journal
Volume
15
Issue
2
ISSN
1936-7406
Citations
0
PageRank
0.34
References
0
Authors
5
Name              Order  Citations  PageRank
Tobias Alonso     1      0          0.68
L. Petrica        2      0          0.34
M. Ruiz           3      0          0.34
J. Petri-Koenig   4      0          0.34
Yaman Umuroglu    5      186        10.67