Title
---
A Parallel RRAM Synaptic Array Architecture for Energy-Efficient Recurrent Neural Networks
Abstract
---
Recurrent neural networks (RNNs) provide excellent performance on applications with sequential data, such as speech recognition. On-chip implementation of RNNs is challenging due to their large number of parameters and computations. In this work, we first present a method for training an LSTM model for language modeling on the Penn Treebank dataset with binary weights and multi-bit activations, and then map it onto a fully parallel RRAM array architecture ("XNOR-RRAM"). An energy-efficient XNOR-RRAM-based system for LSTM RNNs is implemented and benchmarked on the Penn Treebank dataset. Our results show that 4-bit activation precision provides a near-optimal perplexity of 115.3 with an estimated energy efficiency of ~27 TOPS/W.
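The core technique the abstract describes, training an LSTM with binary weights and multi-bit activations, is typically realized with a straight-through estimator (STE) for the weight sign function plus fake quantization of activations. Below is a minimal PyTorch sketch of that general approach, not the authors' actual code; the names `BinarizeSTE` and `quantize_act` are illustrative, and the layer sizes are arbitrary.

```python
import torch


class BinarizeSTE(torch.autograd.Function):
    """Binarize latent full-precision weights to {-1, +1} in the forward pass;
    pass gradients straight through, clipped to |w| <= 1 (straight-through
    estimator, as in BinaryConnect-style training)."""

    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        # torch.where avoids sign(0) = 0, keeping outputs strictly in {-1, +1}
        return torch.where(w >= 0, torch.ones_like(w), -torch.ones_like(w))

    @staticmethod
    def backward(ctx, grad_output):
        (w,) = ctx.saved_tensors
        # Zero the gradient where the latent weight has left the unit interval
        return grad_output * (w.abs() <= 1).to(grad_output.dtype)


def quantize_act(x, bits=4):
    """Uniform fake quantization of activations to `bits` bits in [-1, 1].
    Rounding applies in the forward pass; in full training, an STE would
    also be wrapped around this rounding so gradients can flow."""
    levels = 2 ** bits - 1
    x = torch.clamp(x, -1.0, 1.0)
    return torch.round((x + 1.0) / 2.0 * levels) / levels * 2.0 - 1.0


# Illustrative use inside one matrix-vector product of an LSTM cell:
w_real = torch.randn(256, 256, requires_grad=True)   # latent full-precision weights
x = torch.randn(32, 256)                             # a batch of hidden states
w_bin = BinarizeSTE.apply(w_real)                    # {-1, +1} weights for the RRAM array
y = quantize_act(torch.tanh(x @ w_bin.t()), bits=4)  # 4-bit activations, per the paper
```

With binary weights, each dot product reduces to XNOR and bit-count operations, which is what allows the in-memory "XNOR-RRAM" array in the paper to evaluate the LSTM's matrix-vector products in parallel.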
Year | DOI | Venue
---|---|---
2018 | 10.1109/SiPS.2018.8598445 | 2018 IEEE International Workshop on Signal Processing Systems (SiPS)
Keywords | Field | DocType
---|---|---
training method, LSTM model, language modeling, Penn Treebank dataset, binary weights, multi-bit activations, fully parallel RRAM array architecture, energy-efficient XNOR-RRAM array based system, LSTM RNN, estimated energy efficiency, parallel RRAM synaptic array architecture, energy-efficient recurrent neural networks, RNNs, sequential data | Perplexity, Pattern recognition, Computer science, Efficient energy use, Parallel computing, Recurrent neural network, Artificial intelligence, Treebank, Language model, Resistive random-access memory, Binary number, Computation | Conference
ISSN | ISBN | Citations
---|---|---
1520-6130 | 978-1-5386-6319-6 | 1
PageRank | References | Authors
---|---|---
0.35 | 7 | 5
Name | Order | Citations | PageRank
---|---|---|---
Shihui Yin | 1 | 71 | 10.03 |
Xiaoyu Sun | 2 | 95 | 16.54 |
Shimeng Yu | 3 | 490 | 56.22 |
Jae-sun Seo | 4 | 536 | 56.32 |
Chaitali Chakrabarti | 5 | 1978 | 184.17 |