Title
One In A Hundred: Selecting the Best Predicted Sequence from Numerous Candidates for Speech Recognition
Abstract
The RNN-Transducer and improved attention-based encoder-decoder models are widely applied to streaming speech recognition. Compared with these two end-to-end models, the CTC model is more efficient in training and inference. However, it cannot capture the linguistic dependencies between output tokens. Inspired by the success of two-pass end-to-end models, we introduce a transformer decoder and a two-stage inference method into the streaming CTC model. During inference, the CTC decoder first generates many candidates in a streaming fashion. The transformer decoder then selects the best candidate based on the corresponding acoustic encoded states. This second-stage transformer decoder can be regarded as a conditional language model. We assume that a sufficiently large and diverse set of candidates generated in the first stage can compensate the CTC model for its lack of language modeling ability. All experiments are conducted on the Chinese Mandarin dataset AISHELL-1. The results show that our proposed model can perform streaming decoding in a fast and straightforward way. Our model achieves up to a 20% reduction in character error rate compared with the baseline CTC model. In addition, our model can also perform non-streaming inference with only slight performance degradation.
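The two-stage inference the abstract describes can be sketched in a few lines: stage one (the streaming CTC decoder) emits N candidate transcripts with scores, and stage two rescores each candidate with the transformer decoder's conditional score and keeps the best. This is a minimal illustration, not the authors' implementation; `decoder_log_prob` and the interpolation weight `ctc_weight` are hypothetical stand-ins for the transformer decoder conditioned on the acoustic encoded states.

```python
def select_best(candidates, decoder_log_prob, ctc_weight=0.3):
    """Second-pass selection over first-pass CTC candidates.

    candidates: list of (text, ctc_log_score) pairs from the streaming CTC decoder.
    decoder_log_prob: callable mapping a candidate text to the (hypothetical)
        transformer-decoder log-probability given the encoder states.
    Returns the candidate text with the best interpolated score.
    """
    best_text, best_score = None, float("-inf")
    for text, ctc_score in candidates:
        # Interpolate the acoustic (CTC) score with the conditional LM score.
        score = ctc_weight * ctc_score + (1 - ctc_weight) * decoder_log_prob(text)
        if score > best_score:
            best_text, best_score = text, score
    return best_text
```

Because the second pass only reads the already-emitted candidates and the cached encoder states, it adds a single rescoring step after streaming decoding finishes.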
Year: 2021
Venue: 2021 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC)
DocType: Conference
ISSN: 2640-009X
ISBN: 978-1-6654-4162-9
Citations: 0
PageRank: 0.34
References: 0
Authors: 6
Name           Order  Citations  PageRank
Zhengkun Tian  1      0          0.34
Jiangyan Yi    2      0          0.34
Ye Bai         3      0          1.35
Jianhua Tao    4      848        138.00
Shuai Zhang    5      0          0.34
Zhengqi Wen    6      86         24.41