Abstract |
---|
Emerging mobile deep neural networks (DNNs) are prevalent on resource-limited devices because of their compact parameter sizes and reduced computation. However, deploying mobile DNNs on conventional DNN accelerators incurs a performance loss due to reduced processing element (PE) utilization. Conventional accelerators are generally designed around a fixed pattern of data reuse, but the data reuse opportunities of different layers in mobile DNNs are diverse. To process mobile DNNs with high performance, we propose an architecture called MoNA (Mobile Neural Architecture), which comprises a flexible dataflow and a reconfigurable computing core. The dataflow supports reconfigurable processing parallelism to maximize the PE utilization of mobile DNNs. The computing core is a 3D PE array that supports the dataflow and avoids high bandwidth requirements. |
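The abstract's core claim is that a fixed data-reuse pattern leaves PEs idle on mobile DNN layers. A minimal sketch of this effect, under assumed parameters not taken from the paper (a 16×16 PE array with a fixed mapping of rows to output channels and columns to input channels):

```python
# Hypothetical illustration (not the paper's actual dataflow or array size):
# a fixed mapping of PE rows <- output channels, PE cols <- input channels.
# Depthwise layers, common in mobile DNNs, have one input channel per
# output channel, so most columns sit idle under this fixed mapping.

def pe_utilization(out_ch: int, in_ch: int, rows: int = 16, cols: int = 16) -> float:
    """Fraction of PEs doing useful work for one channel tile under the
    assumed fixed mapping."""
    used = min(out_ch, rows) * min(in_ch, cols)
    return used / (rows * cols)

# Pointwise (1x1) conv: many input and output channels fill the array.
print(pe_utilization(out_ch=64, in_ch=64))  # 1.0
# Depthwise conv: each filter sees a single input channel.
print(pe_utilization(out_ch=64, in_ch=1))   # 0.0625
```

The gap between these two numbers is the utilization loss a reconfigurable-parallelism dataflow like MoNA's aims to close by remapping idle PEs to other dimensions of the layer.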
Year | DOI | Venue |
---|---|---|
2019 | 10.1109/NEWCAS44328.2019.8961273 | 2019 17th IEEE International New Circuits and Systems Conference (NEWCAS) |
Keywords | Field | DocType |
---|---|---|
MoNA,mobile neural architecture,mobile deep neural networks,mobile DNNs,DNN accelerators,processing element utilization,conventional accelerators,reconfigurable computing core,reconfigurable processing parallelism | Computer architecture,Architecture,Computer science,Electronic engineering,Dataflow,Processing element,Deep neural networks,Reconfigurable computing,Data reuse,Computation,High bandwidth | Conference |
ISSN | ISBN | Citations |
---|---|---|
2472-467X | 978-1-7281-1032-5 | 0 |
PageRank | References | Authors |
---|---|---|
0.34 | 0 | 5 |
Name | Order | Citations | PageRank |
---|---|---|---|
Weiwei Wu | 1 | 0 | 0.68 |
Shouyi Yin | 2 | 579 | 99.95
Fengbin Tu | 3 | 71 | 8.62 |
Leibo Liu | 4 | 816 | 116.95
Shaojun Wei | 5 | 555 | 102.32 |