Abstract |
---|
We propose a novel computation-in-memory (CIM) architecture based on DRAM for binary neural networks, in which a novel charge-sharing circuit enables all logic operations and accumulation to be performed inside the sub-array at a very small area overhead (1.22%). In particular, the in-DRAM accumulation significantly reduces off-chip DRAM accesses. Our experiments show that, on the VGG-9 model for CIFAR-10, our proposed method, realized on DDR4 DRAM, gives 2.56 times smaller latency per image and 19.57 times lower energy consumption in off-chip data transfer than the existing methods, modified Ambit and DRISA, at a very small accuracy loss (0.23%). |
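The abstract's claim that all logic operations and accumulation for a binary neural network can run as in-memory bitwise work rests on the standard BNN identity: with weights and activations constrained to {-1, +1}, a dot product reduces to XNOR followed by a popcount. The sketch below is an assumption-level illustration of that identity (names like `bnn_dot` are hypothetical, not from the paper), showing the bulk bitwise pattern that in-DRAM logic plus in-DRAM accumulation exploits:

```python
def pack_bits(values):
    """Pack a list of +1/-1 values into an integer bit mask (+1 -> bit 1)."""
    mask = 0
    for i, v in enumerate(values):
        if v == 1:
            mask |= 1 << i
    return mask

def bnn_dot(weights, activations):
    """Binary dot product computed as popcount(XNOR(w, a)), rescaled."""
    n = len(weights)
    w = pack_bits(weights)
    a = pack_bits(activations)
    xnor = ~(w ^ a) & ((1 << n) - 1)  # bit is 1 where the signs agree
    matches = bin(xnor).count("1")    # popcount = the accumulation step
    return 2 * matches - n            # map match count back to a +1/-1 sum

# Matches the ordinary arithmetic dot product over {-1, +1} vectors:
w, a = [1, -1, 1, 1], [1, 1, -1, 1]
assert bnn_dot(w, a) == sum(x * y for x, y in zip(w, a))  # -> 0
```

Because the XNOR is a bulk bitwise operation and the popcount is an accumulation, a DRAM sub-array that supports both (as the paper proposes) can evaluate such dot products without shipping operands off-chip.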
Year | DOI | Venue |
---|---|---|
2020 | 10.1109/COOLCHIPS49199.2020.9097642 | 2020 IEEE Symposium in Low-Power and High-Speed Chips (COOL CHIPS) |
Keywords | DocType | ISSN
---|---|---|
Neural network, BNN, accelerator, computation in memory, CIM, DRAM | Conference | 2167-9657
ISBN | Citations | PageRank
---|---|---|
978-1-7281-6348-2 | 0 | 0.34
References | Authors
---|---|
3 | 4
Name | Order | Citations | PageRank |
---|---|---|---|
Haerang Choi | 1 | 5 | 1.14
Yosep Lee | 2 | 0 | 0.34 |
Jae-Joon Kim | 3 | 31 | 8.39 |
Sungjoo Yoo | 4 | 1398 | 96.56 |