Title
Scalable Multi-FPGA Acceleration for Large RNNs with Full Parallelism Levels
Abstract
The increasing size of recurrent neural networks (RNNs) makes it hard to meet the growing demand for real-time AI services. For low-latency RNN serving, FPGA-based accelerators can leverage specialized architectures with optimized dataflow. However, they also suffer from severe HW under-utilization when partitioning RNNs, and thus fail to achieve scalable performance. In this paper, we identify the performance bottlenecks of existing RNN partitioning strategies. We then propose a novel RNN partitioning strategy to achieve scalable multi-FPGA acceleration for large RNNs. First, we introduce three parallelism levels and exploit them by partitioning weight matrices, matrix/vector operations, and layers. Second, we examine the performance impact of collective communications and software pipelining to derive more accurate and optimal distribution results. We prototyped an FPGA-based acceleration system using multiple Intel high-end FPGAs, and our partitioning scheme allows up to 2.4x faster inference of modern RNN workloads than conventional partitioning methods.
Year
2020
DOI
10.1109/DAC18072.2020.9218528
Venue
PROCEEDINGS OF THE 2020 57TH ACM/EDAC/IEEE DESIGN AUTOMATION CONFERENCE (DAC)
DocType
Conference
ISSN
0738-100X
Citations
0
PageRank
0.34
References
0
Authors
5
Name              Order  Citations  PageRank
Dongup Kwon       1      25         4.92
Suyeon Hur        2      0          0.34
Hamin Jang        3      0          0.34
Eriko Nurvitadhi  4      399        33.08
Jangwoo Kim       5      447        35.38