| Title |
|---|
| CIMAT: A Compute-In-Memory Architecture for On-chip Training Based on Transpose SRAM Arrays |
| Abstract |
|---|
| Rapid development in deep neural networks (DNNs) is enabling many intelligent applications. However, on-chip training of DNNs is challenging due to extensive computation and memory bandwidth requirements. To address the memory wall bottleneck, the compute-in-memory (CIM) approach exploits analog computation along the bit lines of the memory array and thus significantly speeds up the ve... |
| Year | DOI | Venue |
|---|---|---|
| 2020 | 10.1109/TC.2020.2980533 | IEEE Transactions on Computers |
| Keywords | DocType | Volume |
|---|---|---|
| Training, Random access memory, Computer architecture, System-on-chip, Pipelines, Common Information Model (computing), Energy efficiency | Journal | 69 |
| Issue | ISSN | Citations |
|---|---|---|
| 7 | 0018-9340 | 7 |
| PageRank | References | Authors |
|---|---|---|
| 0.49 | 0 | 4 |
| Name | Order | Citations | PageRank |
|---|---|---|---|
| Hongwu Jiang | 1 | 16 | 6.77 |
| Xiaochen Peng | 2 | 61 | 12.17 |
| Shanshi Huang | 3 | 15 | 6.75 |
| Shimeng Yu | 4 | 490 | 56.22 |