Title
Injecting Text and Cross-Lingual Supervision in Few-Shot Learning from Self-Supervised Models
Abstract
Self-supervised model pre-training has recently garnered significant interest, but relatively few efforts have explored using additional resources when fine-tuning these models. We demonstrate how universal phoneset acoustic models can leverage cross-lingual supervision to improve the transfer of pretrained self-supervised representations to new languages. We also show how target-language text can be used to enable and improve fine-tuning with the lattice-free maximum mutual information (LF-MMI) objective. In three low-resource languages, these techniques greatly improved few-shot learning performance.
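The following is a minimal sketch, not the authors' code, of the kind of fine-tuning pipeline the abstract describes: a pretrained self-supervised encoder (here a wav2vec 2.0-style model from torchaudio, used for illustration) topped with a linear layer producing per-frame posteriors that an LF-MMI objective could consume. The LF-MMI loss itself, whose numerator graphs would be built from target-language text and whose denominator graph encodes a phone-level language model, is represented by a hypothetical `lfmmi_loss` helper; the output dimension `num_pdfs` is likewise an assumed value.

```python
# Hedged sketch of LF-MMI-style fine-tuning of a pretrained self-supervised encoder.
# `lfmmi_loss` and `num_pdfs` are hypothetical placeholders, not the paper's implementation;
# a real LF-MMI loss would come from a toolkit such as Kaldi or k2.

import torch
import torchaudio


def lfmmi_loss(log_probs, num_graphs, den_graph):
    """Hypothetical stand-in for the LF-MMI objective:
    log p(numerator graphs | x) - log p(denominator graph | x), summed over the batch."""
    raise NotImplementedError("plug in a real LF-MMI implementation here")


bundle = torchaudio.pipelines.WAV2VEC2_BASE       # pretrained self-supervised model
encoder = bundle.get_model()                      # wav2vec 2.0-style encoder
num_pdfs = 2000                                   # assumed size of the output (pdf/phone) inventory
head = torch.nn.Linear(768, num_pdfs)             # maps 768-dim encoder features to pdf scores

optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(head.parameters()), lr=1e-5
)


def train_step(waveforms, num_graphs, den_graph):
    """One fine-tuning step on a batch of target-language utterances (batch, time)."""
    features, _ = encoder.extract_features(waveforms)          # per-layer feature list
    log_probs = torch.log_softmax(head(features[-1]), dim=-1)  # per-frame log-posteriors
    loss = lfmmi_loss(log_probs, num_graphs, den_graph)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```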
Year
2022
DOI
10.1109/ICASSP43922.2022.9746852
Venue
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
3
Name                Order   Citations   PageRank
Matthew Wiesner     1       2           1.39
Desh Raj            2       0           0.34
Sanjeev Khudanpur   3       2155        202.00