Title
An 89TOPS/W and 16.3TOPS/mm² All-Digital SRAM-Based Full-Precision Compute-In Memory Macro in 22nm for Machine-Learning Edge Applications
Abstract
From the cloud to edge devices, artificial intelligence (AI) and machine learning (ML) are widely used in cognitive tasks such as image classification and speech recognition. In recent years, hardware accelerators for AI edge devices have received growing research attention, mainly because of the advantages of AI at the edge: privacy, low latency, and more reliable and efficient use of network bandwidth. However, traditional computing architectures (CPUs, GPUs, FPGAs, and even existing AI-accelerator ASICs) cannot meet the future needs of energy-constrained AI edge applications, because ML computing is data-centric and most of the energy in these architectures is consumed by memory accesses. To improve energy efficiency, both academia and industry are exploring a new computing architecture, compute-in-memory (CIM). CIM research has largely focused on analog approaches with high energy efficiency; however, their main drawback is limited accuracy due to low SNR, so an analog approach may not be suitable for applications that require high accuracy.
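For context, below is a minimal functional sketch (in Python with NumPy) of the multiply-accumulate (MAC) reduction that a digital SRAM-based CIM macro evaluates inside the memory array. The 4-bit operand width, 64-element vector length, and the digital_cim_mac helper are illustrative assumptions, not details taken from the paper. Because the per-bit partial sums are reduced with digital adders rather than analog charge or current summation, the result is bit-exact, which is the accuracy advantage the abstract attributes to an all-digital approach over analog CIM.

    # Illustrative functional model of a digital compute-in-memory (CIM)
    # multiply-accumulate (MAC). Bit widths, vector length, and function name
    # are assumptions for illustration; this is not the paper's circuit.
    import numpy as np

    def digital_cim_mac(weights, activations, act_bits=4):
        """Bit-serial dot product: broadcast one activation bit per step,
        reduce the bitwise products with an adder tree (np.sum), and
        shift-add the per-bit partial sums into the accumulator."""
        acc = 0
        for b in range(act_bits):
            act_bit = (activations >> b) & 1      # one bit of every activation
            partial = np.sum(weights * act_bit)   # adder-tree style reduction
            acc += int(partial) << b              # shift-add accumulation
        return acc

    # Usage: 64 unsigned 4-bit weights (stored in SRAM) and activations (streamed in).
    rng = np.random.default_rng(0)
    w = rng.integers(0, 16, size=64)
    a = rng.integers(0, 16, size=64)
    assert digital_cim_mac(w, a) == int(w @ a)    # bit-exact vs. full-precision dot product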
Year
2021
DOI
10.1109/ISSCC42613.2021.9365766
Venue
2021 IEEE International Solid-State Circuits Conference (ISSCC)
DocType
Conference
Volume
64
ISSN
0193-6530
Citations
1
PageRank
0.35
References
0
Authors
20
Name                 Order  Citations  PageRank
Yu-Der Chih          1      100        14.94
Po-Hao Lee           2      3          1.10
Hidehiro Fujiwara    3      72         12.67
Yi-Chun Shih         4      69         8.05
Chia-Fu Lee          5      12         2.94
Rawan Naous          6      4          0.76
Yu-Lin Chen          7      12         3.40
Chieh-Pu Lo          8      1          0.35
Cheng-Han Lu         9      1          0.69
Haruki Mori          10     1          0.69
Wei-Cheng Zhao       11     1          0.35
Dar Sun              12     9          2.31
Mahmut E. Sinangil   13     70         8.03
Yen-Huei Chen        14     1          0.35
Tan-Li Chou          15     1          0.69
Kerem Akarvardar     16     4          0.76
Hung-Jen Liao        17     1          0.35
Yih Wang             18     73         10.75
Meng-Fan Chang       19     459        45.63
Jonathan Chang       20     15         5.29