Title
8T XNOR-SRAM based Parallel Compute-in-Memory for Deep Neural Network Accelerator
Abstract
A 65nm XNOR-SRAM macro is presented for binary deep neural network (DNN) accelerators. It features 1) a custom XNOR-SRAM bit-cell design with 8 transistors (8T); 2) a fully parallel compute-in-memory capability of XNOR bit-counting for inference. A multi-level sense amplifier is employed as a Flash analog-to-digital converter (ADC) at the edge of the array. The impact of ADC offset on the algorithm-level accuracy is statistically evaluated on a VGG-like XNOR-Net with the CIFAR-10 dataset, showing a marginal degradation of ~1.77%. Read disturb of the SRAM bit-cell during parallel computation is also evaluated, and its impact is minimized by lowering the word-line (WL) voltage. Silicon measurement results show that the proposed XNOR-SRAM unit-macro achieves a fast access time (2.3 ns) for parallel bit-counting and a high energy efficiency of 44.8-62.8 TOPS/W, depending on the WL voltage.
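For context, the XNOR bit-counting that the macro performs in the analog domain corresponds, at the algorithm level, to a binarized dot product computed with XNOR and popcount. Below is a minimal Python sketch of that equivalence; the function names and bit-packing convention are illustrative assumptions, not taken from the paper:

```python
def to_pm1(bits, n):
    """Unpack an n-bit integer into a {-1, +1} vector (bit 1 -> +1, bit 0 -> -1)."""
    return [1 if (bits >> i) & 1 else -1 for i in range(n)]

def xnor_popcount_dot(w_bits, x_bits, n):
    """Binarized dot product of two n-bit packed vectors.

    XNOR counts the positions where weight and activation bits agree;
    the signed dot product over {-1, +1} is then 2 * popcount - n.
    """
    agree = ~(w_bits ^ x_bits) & ((1 << n) - 1)  # XNOR, masked to n bits
    return 2 * bin(agree).count("1") - n

# Example: packed weights 0b1011 and activations 0b1001 over 4 elements.
w, x, n = 0b1011, 0b1001, 4
assert xnor_popcount_dot(w, x, n) == sum(
    a * b for a, b in zip(to_pm1(w, n), to_pm1(x, n))
)  # both give 2
```

In the macro itself this popcount is accumulated as an analog bit-line quantity and digitized by the multi-level sense amplifier acting as a Flash ADC, rather than computed digitally as above.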
Year
2020
DOI
10.1109/MWSCAS48704.2020.9184455
Venue
2020 IEEE 63rd International Midwest Symposium on Circuits and Systems (MWSCAS)
Keywords
parallel compute-in-memory, binary deep neural network accelerator, compute-in-memory, XNOR bit-counting, multilevel sense amplifier, Flash analog-to-digital converter, ADC, VGG-like XNOR-Net, CIFAR-10 dataset, parallel bit-counting, XNOR-SRAM bit-cell design
DocType
Conference
ISSN
1548-3746
ISBN
978-1-7281-8059-5
Citations
0
PageRank
0.34
References
0
Authors
3
Name	Order	Citations	PageRank
Hongwu Jiang	1	16	6.77
Rui Liu	2	47	5.32
Shimeng Yu	3	4905	6.22