Title
MIXTURE OF INFORMED EXPERTS FOR MULTILINGUAL SPEECH RECOGNITION
Abstract
When trained on related or low-resource languages, multilingual speech recognition models often outperform their monolingual counterparts. However, these models can suffer a loss in performance for high-resource or unrelated languages. We investigate the use of a mixture-of-experts approach to assign per-language parameters in the model, increasing network capacity in a structured fashion. We introduce a novel variant of this approach, 'informed experts', which attempts to tackle inter-task conflicts by eliminating gradients from other tasks in these task-specific parameters. We conduct experiments on a real-world task with English, French and four dialects of Arabic to show the effectiveness of our approach. Our model matches or outperforms the monolingual models for almost all languages, with gains of as much as 31% relative. It also outperforms the baseline multilingual model for all languages by up to 9% relative.
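The core mechanism described in the abstract, routing each utterance through a per-language expert selected by a known language ID so that one language's loss never updates another language's parameters, can be illustrated with a minimal sketch. The snippet below is not the paper's RNN-T architecture: the `informed_moe` function, the single linear expert layer, and all shapes are illustrative assumptions. It only demonstrates how hard, ID-informed routing keeps gradients from other tasks out of task-specific parameters.

```python
import jax
import jax.numpy as jnp

NUM_LANGS, D_IN, D_OUT = 6, 8, 8   # e.g. English, French, 4 Arabic dialects

def init_params(key):
    # One weight matrix per language: the task-specific "expert" parameters.
    return {"experts": 0.1 * jax.random.normal(key, (NUM_LANGS, D_IN, D_OUT))}

def informed_moe(params, x, lang_id):
    # Hard, language-ID-informed routing: each utterance uses only its own
    # language's expert. Because only the selected expert appears in the
    # forward pass, backprop from this example's loss touches only that
    # expert's weights; gradients from other languages never reach them.
    w = params["experts"][lang_id]           # [batch, d_in, d_out]
    return jnp.einsum("bi,bio->bo", x, w)

def loss_fn(params, x, lang_id, y):
    pred = informed_moe(params, x, lang_id)
    return jnp.mean((pred - y) ** 2)

key = jax.random.PRNGKey(0)
params = init_params(key)
x = jax.random.normal(key, (4, D_IN))
lang_id = jnp.array([0, 0, 1, 3])            # batch mixing three languages
y = jnp.zeros((4, D_OUT))

grads = jax.grad(loss_fn)(params, x, lang_id, y)
# Experts 2, 4 and 5 receive exactly zero gradient from this batch.
print(jnp.abs(grads["experts"]).sum(axis=(1, 2)))
```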
Year
2021
DOI
10.1109/ICASSP39728.2021.9414379
Venue
2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021)
Keywords
end-to-end speech recognition, multilingual, RNN-T, language id, mixture of experts
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
8
Name                  Order  Citations  PageRank
Neeraj Gaur           1      0          1.35
Brian Farris          2      0          1.35
Parisa Haghani        3      4          1.75
Isabel Leal           4      0          1.01
Pedro J. Moreno       5      1256       114.37
Manasa Prasad         6      4          2.72
Bhuvana Ramabhadran   7      1779       153.83
Yun Zhu               8      0          1.69