Abstract |
---|
Convolutional Neural Networks (CNNs) have become the state-of-the-art algorithms for many computer vision tasks, but their high computational and memory complexity makes them hard to deploy on traditional platforms such as CPUs. Memory energy can account for a large part of the system energy, which limits the energy efficiency of CNN processing. The emerging metal-oxide resistive switching random-access memory (RRAM) has been widely studied because of properties such as high storage density and compatibility with CMOS. In this paper, a system-level energy analysis of using RRAM as the on-chip weight buffer is carried out for a typical CNN accelerator. Hardware and scheduling optimizations are proposed to fully utilize the large on-chip RAM and to avoid high read/write energy overhead. Experimental results show that RRAM-based designs save 12-18% of system energy with 15-75% smaller on-chip RAM area compared with SRAM-based designs. |
Year | DOI | Venue
---|---|---
2018 | 10.1109/ISVLSI.2018.00085 | 2018 IEEE Computer Society Annual Symposium on VLSI (ISVLSI)

Keywords | Field | DocType
---|---|---
RRAM, Convolutional Neural Network, Hardware Accelerator | System on a chip, Data transmission, Efficient energy use, High memory, Computer science, Static random-access memory, CMOS, Bandwidth (signal processing), Embedded system, Resistive random-access memory | Conference

ISSN | ISBN | Citations
---|---|---
2159-3469 | 978-1-5386-7100-9 | 1

PageRank | References | Authors
---|---|---
0.37 | 4 | 6
Name | Order | Citations | PageRank
---|---|---|---
Kaiyuan Guo | 1 | 332 | 19.19 |
Jincheng Yu | 2 | 315 | 19.49 |
Xuefei Ning | 3 | 25 | 6.37 |
Yiming Hu | 4 | 639 | 44.91 |
Yu Wang | 5 | 2279 | 211.60 |
Huazhong Yang | 6 | 2239 | 214.90 |