Title
Attention Based On-Device Streaming Speech Recognition with Large Speech Corpus
Abstract
In this paper, we present a new on-device automatic speech recognition (ASR) system based on monotonic chunk-wise attention (MoChA) models trained with a large (> 10K hours) corpus. We attained a word recognition rate of around 90% for the general domain, mainly by using joint training with connectionist temporal classification (CTC) and cross entropy (CE) losses, minimum word error rate (MWER) training, layer-wise pretraining, and data augmentation methods. In addition, we compressed our models by a factor of more than 3.4 using an iterative hyper low-rank approximation (LRA) method while minimizing the degradation in recognition accuracy. The memory footprint was further reduced with 8-bit quantization to bring the final model size below 39 MB. For on-demand adaptation, we fused the MoChA models with statistical n-gram models, achieving a relative improvement of 36% on average in word error rate (WER) for target domains, including the general domain.
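The compression pipeline described in the abstract combines low-rank approximation of weight matrices with 8-bit quantization. A minimal sketch of that idea, assuming a generic truncated-SVD factorization and symmetric per-tensor int8 quantization (the paper's exact iterative hyper-LRA scheme and quantizer are not specified here):

```python
import numpy as np

def low_rank_factor(w, rank):
    # Truncated SVD: W (m x n) is approximated by A (m x r) @ B (r x n),
    # cutting parameters from m*n to r*(m + n) when r is small.
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    a = u[:, :rank] * s[:rank]
    b = vt[:rank, :]
    return a, b

def quantize_int8(w):
    # Symmetric per-tensor quantization: map the float range to [-127, 127].
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.standard_normal((512, 512)).astype(np.float32)  # toy weight matrix

a, b = low_rank_factor(w, rank=64)
qa, sa = quantize_int8(a)
qb, sb = quantize_int8(b)

orig = w.nbytes                     # float32 storage
compressed = qa.nbytes + qb.nbytes  # int8 low-rank factors
print(orig / compressed)            # 16x smaller: 4x from rank-64 LRA, 4x from int8
```

At inference time the factored layer is applied as two smaller matrix multiplies (x @ A) @ B after dequantizing the int8 factors with their scales.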
Year
2019
DOI
10.1109/ASRU46091.2019.9004027
Venue
ASRU
Field
Cross entropy, Speech corpus, Computer science, Word recognition, Word error rate, Speech recognition, Memory footprint, Quantization (signal processing), Classifier (linguistics), Connectionism
DocType
Conference
Citations
1
PageRank
0.37
References
0
Authors
13
Name              Order  Citations  PageRank
Kwangyoun Kim     1      2          4.11
Seokyeong Jung    2      1          0.37
Jungin Lee        3      1          0.37
Myoungji Han      4      11         1.50
Chanwoo Kim       5      1          0.37
Kyungmin Lee      6      2          3.09
Dhananjaya Gowda  7      3          5.47
Junmo Park        8      1          0.37
Sungsoo Kim       9      1          0.37
Sichen Jin        10     1          0.37
Young-Yoon Lee    11     1          0.37
Jinsu Yeo         12     1          0.37
Daehyun Kim       13     1          0.37