Abstract |
---|
The increasing size of recurrent neural networks (RNNs) makes it hard to meet the growing demand for real-time AI services. For low-latency RNN serving, FPGA-based accelerators can leverage specialized architectures with optimized dataflow. However, they also suffer from severe hardware under-utilization when partitioning RNNs, and thus fail to achieve scalable performance. In this paper, we identify the performance bottlenecks of existing RNN partitioning strategies. We then propose a novel RNN partitioning strategy to achieve scalable multi-FPGA acceleration for large RNNs. First, we introduce three parallelism levels and exploit them by partitioning weight matrices, matrix/vector operations, and layers. Second, we examine the performance impact of collective communications and software pipelining to derive more accurate and optimal distribution results. We prototyped an FPGA-based acceleration system using multiple Intel high-end FPGAs, and our partitioning scheme allows up to 2.4x faster inference of modern RNN workloads than conventional partitioning methods. |
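The weight-matrix partitioning the abstract mentions can be illustrated with a minimal sketch. NumPy arrays stand in for per-FPGA weight slices, and the function name, the column-wise split, and the summed "all-reduce" step are illustrative assumptions, not the paper's actual scheme:

```python
import numpy as np

def partitioned_matvec(W, x, num_devices):
    """Emulate column-wise partitioning of a weight matrix.

    Each "device" holds a column slice of W and the matching slice of x,
    computes a partial matrix-vector product, and the partials are summed
    (mimicking an all-reduce collective) to recover the full output.
    """
    col_groups = np.array_split(np.arange(W.shape[1]), num_devices)
    partials = [W[:, cols] @ x[cols] for cols in col_groups]
    return np.sum(partials, axis=0)  # sum-reduce across devices

# Sanity check: the partitioned result matches the monolithic matvec.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))
x = rng.standard_normal(8)
assert np.allclose(partitioned_matvec(W, x, 4), W @ x)
```

In a real multi-FPGA setting the summation would be a collective communication over the interconnect, which is exactly the cost the paper's distribution model has to account for.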
Year | DOI | Venue
---|---|---
2020 | 10.1109/DAC18072.2020.9218528 | Proceedings of the 2020 57th ACM/EDAC/IEEE Design Automation Conference (DAC)

DocType | ISSN | Citations
---|---|---
Conference | 0738-100X | 0

PageRank | References | Authors
---|---|---
0.34 | 0 | 5
Name | Order | Citations | PageRank |
---|---|---|---
Dongup Kwon | 1 | 25 | 4.92 |
Suyeon Hur | 2 | 0 | 0.34 |
Hamin Jang | 3 | 0 | 0.34 |
Eriko Nurvitadhi | 4 | 399 | 33.08 |
Jangwoo Kim | 5 | 447 | 35.38 |