Title
CIMAT: A Compute-In-Memory Architecture for On-chip Training Based on Transpose SRAM Arrays
Abstract
Rapid development of deep neural networks (DNNs) is enabling many intelligent applications. However, on-chip training of DNNs is challenging due to the extensive computation and memory bandwidth requirements. To address the memory-wall bottleneck, the compute-in-memory (CIM) approach exploits analog computation along the bit lines of the memory array, thus significantly speeding up the ve...
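The mechanism named in the abstract, analog accumulation along the bit lines of a transpose SRAM array so that the forward pass and the error backpropagation can reuse the same stored weights, can be emulated numerically. The following NumPy sketch is only an illustration under assumed array size, weight precision, and access scheme; it is not code from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    # Weight matrix stored in an SRAM-like array: rows play the role of word lines,
    # columns the role of bit lines (sizes and ternary weights are assumptions).
    W = rng.integers(-1, 2, size=(64, 32)).astype(float)
    x = rng.integers(0, 2, size=64).astype(float)  # binary activations driving the word lines

    # Forward CIM read: activating word lines in parallel sums cell currents on each
    # bit line, which is mathematically the vector-matrix product x @ W.
    forward = x @ W          # shape (32,)

    # Transpose CIM read: a transpose SRAM array reads the same stored weights along
    # the orthogonal direction, giving W @ delta for error backpropagation without
    # duplicating or physically transposing the matrix in memory.
    delta = rng.standard_normal(32)
    backward = W @ delta     # shape (64,)

    print(forward.shape, backward.shape)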
Year
2020
DOI
10.1109/TC.2020.2980533
Venue
IEEE Transactions on Computers
Keywords
Training, Random access memory, Computer architecture, System-on-chip, Pipelines, Common Information Model (computing), Energy efficiency
DocType
Journal
Volume
69
Issue
7
ISSN
0018-9340
Citations
7
PageRank
0.49
References
0
Authors
4
Name            Order  Citations  PageRank
Hongwu Jiang    1      16         6.77
Xiaochen Peng   2      61         12.17
Shanshi Huang   3      15         6.75
Shimeng Yu      4      490        56.22