Title |
---|
Automated Neural Network Accelerator Generation Framework for Multiple Neural Network Applications |
Abstract |
---|
Neural networks are widely used in many applications, but a typical neural network accelerator supports only one application at a time. To serve multiple neural network applications, per-application data such as synaptic weights and biases must therefore be loaded quickly. Field-programmable gate array (FPGA)-based implementations incur a large performance overhead for this reloading owing to their low data-transmission bandwidth. To solve this problem, this paper presents an automated FPGA-based multi-neural-network accelerator generation framework that can quickly switch among several applications by storing each application's neural network data in on-chip memory inside the FPGA. We first design a shared custom hardware accelerator that supports rapid switching among multiple target neural network applications, and then introduce an automated generation framework that performs training, weight quantization, and neural accelerator synthesis. |
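The abstract mentions a weight-quantization step in the generation framework but does not specify the scheme. As a hedged illustration only, the sketch below shows a generic symmetric uniform quantizer that maps floating-point weights to signed 8-bit integers, the kind of compression commonly used to fit per-application weights into on-chip FPGA memory; the function name and parameters are hypothetical, not from the paper.

```python
# Hypothetical sketch of a weight-quantization step (the paper does not
# specify its scheme): symmetric uniform quantization to signed 8-bit codes.

def quantize_weights(weights, num_bits=8):
    qmax = 2 ** (num_bits - 1) - 1               # e.g. 127 for 8-bit weights
    scale = max(abs(w) for w in weights) / qmax  # largest weight maps to qmax
    q = [round(w / scale) for w in weights]      # integer codes stored on-chip
    return q, scale

weights = [0.5, -1.2, 0.03, 0.9]                 # example float weights
q, scale = quantize_weights(weights)
dequantized = [qi * scale for qi in q]           # approximation used at inference
```

Storing only the integer codes plus one scale factor per layer shrinks the weight footprint roughly fourfold versus 32-bit floats, which is what makes keeping several applications resident on-chip plausible.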
Year | DOI | Venue |
---|---|---|
2018 | 10.1109/tencon.2018.8650190 | TENCON IEEE Region 10 Conference Proceedings |
Keywords | Field | DocType |
---|---|---|
neural network, accelerator, FPGA | System on a chip, Data transmission, Computer science, Custom hardware, Field-programmable gate array, Electronic engineering, Gate array, Bandwidth (signal processing), Computer hardware, Artificial neural network, Quantization (signal processing) | Conference |
ISSN | Citations | PageRank |
---|---|---|
2159-3442 | 0 | 0.34 |
References | Authors |
---|---|
0 | 4 |
Name | Order | Citations | PageRank |
---|---|---|---|
Inho Lee | 1 | 1 | 1.72 |
Seongmin Hong | 2 | 0 | 1.35 |
Giha Ryu | 3 | 0 | 0.34 |
Yongjun Park | 4 | 277 | 20.15 |