Title
MEMORY-EFFICIENT SPEECH RECOGNITION ON SMART DEVICES
Abstract
Recurrent transducer models have emerged as a promising solution for speech recognition on current and next-generation smart devices. Transducer models provide competitive accuracy within a reasonable memory footprint, alleviating the memory-capacity constraints of these devices. However, these models access parameters from off-chip memory at every input time step, which adversely affects device battery life and limits their usability on low-power devices. We address the transducer model's memory-access concerns by optimizing its architecture and designing novel recurrent cells. We demonstrate that i) the model's energy cost is dominated by accessing model weights from off-chip memory, ii) the transducer model architecture is pivotal in determining the number of off-chip memory accesses, and model size alone is not a good proxy, and iii) our transducer model optimizations and novel recurrent cell reduce off-chip memory accesses by 4.5x and model size by 2x with minimal accuracy impact.
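The abstract's claim that off-chip memory traffic, not model size alone, drives energy cost can be illustrated with a back-of-the-envelope calculation. The sketch below is not from the paper: it assumes an LSTM-based transducer whose layer weights do not fit in on-chip SRAM and must therefore be streamed from DRAM at every recurrent step, so total weight traffic scales with the number of time steps as well as with parameter count. All sizes and layer shapes are hypothetical.

```python
# Hypothetical estimate (not from the paper) of off-chip weight traffic
# for an LSTM-based recurrent transducer. Assumption: weights exceed
# on-chip SRAM, so every time step re-streams all layer weights from DRAM.

def lstm_params(input_size: int, hidden_size: int) -> int:
    """Parameter count of a standard LSTM cell: 4 gates, each with an
    input-to-hidden matrix, a hidden-to-hidden matrix, and a bias."""
    return 4 * (hidden_size * (input_size + hidden_size) + hidden_size)

def per_utterance_weight_traffic(layers, num_steps, bytes_per_weight=1):
    """Bytes read from off-chip memory over one utterance, assuming each
    recurrent step streams every layer's weights once (int8 weights)."""
    params = sum(lstm_params(i, h) for i, h in layers)
    return params * bytes_per_weight * num_steps

# Hypothetical 2-layer encoder on a 10-second utterance (100 frames/s).
layers = [(512, 1024), (1024, 1024)]
frames = 1000
traffic_a = per_utterance_weight_traffic(layers, frames)
# Halving the number of recurrent steps (e.g. via time reduction in the
# encoder) halves the DRAM traffic with no change to parameter count:
traffic_b = per_utterance_weight_traffic(layers, frames // 2)
assert traffic_b * 2 == traffic_a
```

The point of the toy model: two architectures of identical size can differ sharply in per-utterance DRAM accesses, which is the quantity the abstract ties to battery life.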
Year
2021
DOI
10.1109/ICASSP39728.2021.9414502
Venue
2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021)
Keywords
RNN-T, ASR, Recurrent Transducer, Automatic Speech Recognition, On-device Inference
DocType
Conference
ISSN
ICASSP 2021
Citations
0
PageRank
0.34
References
0
Authors
7
Name                  Order  Citations  PageRank
Ganesh Venkatesh      1      274        17.97
Alagappan Valliappan  2      0          0.34
Jay Mahadeokar        3      9          4.94
Yuan Shangguan        4      1          2.04
Christian Fuegen      5      9          6.58
Michael L. Seltzer    6      1027       69.42
Vikas Chandra         7      691        59.76