Abstract |
---|
Recurrent transducer models have emerged as a promising solution for speech recognition on current and next-generation smart devices. Transducer models provide competitive accuracy within a reasonable memory footprint, alleviating the memory-capacity constraints of these devices. However, these models access parameters from off-chip memory at every input time step, which adversely affects device battery life and limits their usability on low-power devices. We address the transducer model's memory-access costs by optimizing its architecture and introducing novel recurrent cell designs. We demonstrate that i) the model's energy cost is dominated by fetching model weights from off-chip memory, ii) the transducer model's architecture, rather than model size alone, determines the number of off-chip memory accesses, and iii) our model optimizations and novel recurrent cell reduce off-chip memory accesses by 4.5x and model size by 2x with minimal accuracy impact. |
Year | DOI | Venue |
---|---|---|
2021 | 10.1109/ICASSP39728.2021.9414502 | 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2021) |
Keywords | DocType | ISSN |
---|---|---|
RNN-T, ASR, Recurrent Transducer, Automatic Speech Recognition, On-device Inference | Conference | ICASSP 2021 |
Citations | PageRank | References |
---|---|---|
0 | 0.34 | 0 |
Authors |
---|
7 |
Name | Order | Citations | PageRank |
---|---|---|---|
Ganesh Venkatesh | 1 | 274 | 17.97 |
Alagappan Valliappan | 2 | 0 | 0.34 |
Jay Mahadeokar | 3 | 9 | 4.94 |
Yuan Shangguan | 4 | 1 | 2.04 |
Christian Fuegen | 5 | 9 | 6.58 |
Michael L. Seltzer | 6 | 1027 | 69.42 |
Vikas Chandra | 7 | 691 | 59.76 |