Title
LAS-Transformer: An Enhanced Transformer Based on the Local Attention Mechanism for Speech Recognition
Abstract
Recently, Transformer-based models have shown promising results in automatic speech recognition (ASR), outperforming models based on recurrent neural networks (RNNs) and convolutional neural networks (CNNs). However, directly applying a Transformer to ASR does not effectively exploit the correlation among speech frames, leaving the model trapped in a sub-optimal solution. To this end, we propose a local attention Transformer for speech recognition that exploits the high correlation among neighboring speech frames. First, we use relative positional embedding, rather than absolute positional embedding, to improve the generalization of the Transformer to speech sequences of different lengths. Second, we add local attention based on parametric positional relations to the self-attention module, explicitly incorporating prior knowledge so that training is insensitive to hyperparameters, which improves performance. Experiments on the LibriSpeech dataset show that our proposed approach achieves word error rates of 2.3%/5.5% with language model fusion and without any external data, reducing the word error rate by 17.8%/9.8% compared to the baseline. These results are close to, or better than, those of other state-of-the-art end-to-end models.
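The abstract's core idea, biasing self-attention toward nearby speech frames via a parametric positional term, can be illustrated with a minimal sketch. The paper's exact parameterization is not given in this record; the Gaussian locality bias with width `sigma` below is an assumed, illustrative choice, and `local_attention` is a hypothetical helper, not the authors' code.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def local_attention(Q, K, V, sigma=2.0):
    """Scaled dot-product attention with a Gaussian locality bias.

    A bias -((i - j)^2) / (2 * sigma^2) is added to the attention
    logits so each frame attends mostly to nearby frames. Here sigma
    is fixed for illustration; in a trainable model it (or a richer
    positional term) would be a learned parameter.
    """
    T, d = Q.shape
    logits = Q @ K.T / np.sqrt(d)                 # (T, T) content scores
    i = np.arange(T)[:, None]
    j = np.arange(T)[None, :]
    bias = -((i - j) ** 2) / (2.0 * sigma ** 2)   # peaks on the diagonal
    weights = softmax(logits + bias, axis=-1)     # rows sum to 1
    return weights @ V, weights

# toy example: 8 frames, 4-dimensional features
rng = np.random.default_rng(0)
T, d = 8, 4
Q, K, V = (rng.normal(size=(T, d)) for _ in range(3))
out, w = local_attention(Q, K, V)
```

Because the bias decays with distance |i - j|, attention mass concentrates near the diagonal, encoding the prior that adjacent speech frames are highly correlated while still letting strong content scores reach farther away.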
Year
2022
DOI
10.3390/info13050250
Venue
INFORMATION
Keywords
end-to-end model, speech recognition, Transformer, local attention
DocType
Journal
Volume
13
Issue
5
ISSN
2078-2489
Citations
0
PageRank
0.34
References
0
Authors
3
Name          Order  Citations  PageRank
Pengbin Fu    1      0          0.34
Daxing Liu    2      0          0.34
Huirong Yang  3      5          1.12