Title
TRANSFORMER-TRANSDUCERS FOR CODE-SWITCHED SPEECH RECOGNITION
Abstract
We live in a world where 60% of the population can speak two or more languages fluently. Members of these communities constantly switch between languages when having a conversation. As automatic speech recognition (ASR) systems are deployed in the real world, there is a need for practical systems that can handle multiple languages both within an utterance and across utterances. In this paper, we present an end-to-end ASR system using a transformer-transducer model architecture for code-switched speech recognition. We propose three modifications over the vanilla model to handle various aspects of code-switching. First, we introduce two auxiliary loss functions to handle the low-resource scenario of code-switching. Second, we propose a novel mask-based training strategy with language ID information to improve label encoder training for intra-sentential code-switching. Finally, we propose a multi-label/multi-audio encoder structure to leverage the vast monolingual speech corpora for code-switching. We demonstrate the efficacy of our proposed approaches on SEAME, a public Mandarin-English code-switching corpus, achieving mixed error rates of 18.5% and 26.3% on the test_man and test_sge sets, respectively.
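The abstract only names the techniques. For readers who want a concrete picture of a transducer objective with an auxiliary loss, the following is a minimal, hypothetical Python sketch: a toy transformer-transducer trained with the RNN-T loss plus one auxiliary CTC loss on the audio encoder. All module sizes, the choice of CTC as the auxiliary objective, and the 0.3 weighting are illustrative assumptions, not the authors' implementation; the paper's mask-based label-encoder training and multi-encoder variants are omitted.

```python
# Hypothetical sketch, NOT the paper's code: a tiny transformer-transducer
# with an assumed auxiliary CTC loss on the audio (transcription) encoder.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchaudio

class TinyTransducer(nn.Module):
    def __init__(self, n_mels=80, vocab=64, d=256, blank=0):
        super().__init__()
        self.blank = blank
        # Audio encoder: a small transformer stack standing in for the real one.
        self.proj_in = nn.Linear(n_mels, d)
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True)
        self.audio_enc = nn.TransformerEncoder(layer, num_layers=2)
        # Label (prediction) encoder over previously emitted non-blank tokens.
        self.embed = nn.Embedding(vocab, d)
        self.label_enc = nn.LSTM(d, d, batch_first=True)
        # Joint network combines the two streams into per-(t, u) logits.
        self.joint = nn.Sequential(nn.Linear(2 * d, d), nn.Tanh(),
                                   nn.Linear(d, vocab))
        # Auxiliary CTC head on the audio encoder (assumed design choice).
        self.ctc_head = nn.Linear(d, vocab)

    def forward(self, feats, tokens):
        a = self.audio_enc(self.proj_in(feats))                  # (B, T, d)
        # Prepend blank as a start symbol for the label encoder.
        bos = torch.full_like(tokens[:, :1], self.blank)
        l, _ = self.label_enc(self.embed(torch.cat([bos, tokens], dim=1)))
        # Broadcast-combine into the (B, T, U+1, vocab) joint lattice.
        j = self.joint(torch.cat([
            a.unsqueeze(2).expand(-1, -1, l.size(1), -1),
            l.unsqueeze(1).expand(-1, a.size(1), -1, -1)], dim=-1))
        return j, a

def loss_fn(model, feats, feat_lens, tokens, tok_lens, aux_weight=0.3):
    logits, a = model(feats, tokens)
    rnnt = torchaudio.functional.rnnt_loss(
        logits, tokens.int(), feat_lens.int(), tok_lens.int(), blank=model.blank)
    # Auxiliary CTC loss regularizing the audio encoder (weight is assumed).
    log_probs = F.log_softmax(model.ctc_head(a), dim=-1).transpose(0, 1)
    ctc = F.ctc_loss(log_probs, tokens, feat_lens, tok_lens, blank=model.blank)
    return rnnt + aux_weight * ctc

# Toy usage with random tensors (shapes only, not meaningful training):
model = TinyTransducer()
feats = torch.randn(2, 50, 80)                  # (batch, frames, mel bins)
tokens = torch.randint(1, 64, (2, 7))           # avoid the blank id 0
loss = loss_fn(model, feats, torch.tensor([50, 50]), tokens, torch.tensor([7, 7]))
```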
Year
2021
DOI
10.1109/ICASSP39728.2021.9413562
Venue
2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021)
Keywords
code-switching, end-to-end, neural transducers
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
4
Name              Order  Citations  PageRank
Siddharth Dalmia  1      1          3.10
Yuzong Liu        2      96         6.63
Srikanth Ronanki  3      0          0.68
Katrin Kirchhoff  4      1026       95.24