Abstract |
---|
Approximate computing is emerging as a promising technique for achieving high energy efficiency. Multi-layer perceptron (MLP) models can approximate many modern applications with little quality loss. However, the variety of MLP topologies limits a fixed hardware design's performance across all cases. In this paper, a scheduling framework is proposed to guide the mapping of MLPs onto limited hardware resources with high performance. We then design a reconfigurable neural architecture (RNA) to support the proposed scheduling framework. RNA can be reconfigured to accelerate different MLP topologies and achieves higher performance than other MLP accelerators. |
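For context, an MLP of the kind the abstract describes is a stack of fully connected layers with nonlinear activations, and its topology (number and width of layers) is exactly what varies between applications. A minimal plain-Python sketch of a forward pass follows; the 4-8-2 topology, random weights, and ReLU activation are illustrative assumptions, not details from the paper:

```python
import random

def relu(x):
    # Rectified linear unit, a common MLP activation (illustrative choice).
    return x if x > 0.0 else 0.0

def mlp_forward(x, layers):
    """Run one input vector through a stack of fully connected layers.

    `layers` is a list of (weights, biases) pairs, where `weights` holds
    one weight vector per output neuron. Changing the layer count or
    widths changes the compute/mapping pattern, which is what motivates
    a reconfigurable accelerator.
    """
    for weights, biases in layers:
        x = [relu(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(weights, biases)]
    return x

# Hypothetical 4-8-2 topology with random weights, just to show the shapes.
random.seed(0)
topology = [4, 8, 2]
layers = [([[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)],
           [0.0] * n_out)
          for n_in, n_out in zip(topology, topology[1:])]
out = mlp_forward([0.5, -0.2, 0.1, 0.9], layers)
print(len(out))  # one value per neuron in the final layer
```

A scheduler like the one proposed would decide how each layer's multiply-accumulate loops are tiled onto the limited hardware resources.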
Year | Venue | Field |
---|---|---|
2015 | 2015 IEEE International Symposium on Circuits and Systems (ISCAS) | Architecture, Adder, Scheduling (computing), Computer science, Parallel computing, Electronic engineering, Network topology, Perceptron, Computer engineering, Benchmark (computing), High energy, Approximate computing |
DocType | ISSN | Citations |
---|---|---|
Conference | 0271-4302 | 1 |
PageRank | References | Authors |
---|---|---|
0.35 | 5 | 5 |
Name | Order | Citations | PageRank |
---|---|---|---|
Fengbin Tu | 1 | 71 | 8.62 |
Shouyi Yin | 2 | 579 | 99.95 |
Peng Ouyang | 3 | 129 | 19.36 |
Leibo Liu | 4 | 816 | 116.95 |
Shaojun Wei | 5 | 555 | 102.32 |