Title
Effective combination of pretrained models - KIT@IWSLT2022
Abstract
Pretrained models in acoustic and textual modalities can potentially improve speech translation for both cascade and end-to-end approaches. In this evaluation, we investigate this question empirically by using the wav2vec, mBART50, and DeltaLM models to improve our text and speech translation systems. The experiments showed that combining these models with an advanced audio segmentation method improves over our previous end-to-end system by up to 7 BLEU points. More importantly, they showed that, given enough data and modeling capacity to overcome the training difficulty, end-to-end models can outperform even very competitive cascade systems. In our experiments this gap can be as large as 2.0 BLEU points, the same margin by which cascade systems had typically led in previous years.
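The end-to-end systems described in the abstract couple a pretrained acoustic encoder with a pretrained multilingual text decoder. As a rough illustration only, not the authors' implementation, the following minimal Python sketch wires a wav2vec 2.0 encoder to an mBART50 decoder using the Hugging Face transformers SpeechEncoderDecoderModel; the checkpoint names and the German target language are illustrative assumptions.

import torch
from transformers import (
    SpeechEncoderDecoderModel,
    Wav2Vec2FeatureExtractor,
    MBart50Tokenizer,
)

# Illustrative checkpoints (assumptions, not the paper's exact models).
encoder_id = "facebook/wav2vec2-large-lv60"  # pretrained acoustic encoder
decoder_id = "facebook/mbart-large-50"       # pretrained multilingual decoder

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(encoder_id)
tokenizer = MBart50Tokenizer.from_pretrained(decoder_id, tgt_lang="de_DE")

# Couple encoder and decoder; the cross-attention connecting them is newly
# initialized and must be learned from speech translation data.
model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(
    encoder_id, decoder_id
)
model.config.decoder_start_token_id = tokenizer.lang_code_to_id["de_DE"]
model.config.pad_token_id = tokenizer.pad_token_id

# Forward pass on one second of dummy 16 kHz audio with a dummy target.
waveform = torch.randn(16000)
inputs = feature_extractor(
    waveform.numpy(), sampling_rate=16000, return_tensors="pt"
)
labels = tokenizer(text_target="Ein Beispielsatz.", return_tensors="pt").input_ids
loss = model(input_values=inputs.input_values, labels=labels).loss

The randomly initialized cross-attention is one source of the "training difficulty" the abstract alludes to: the pretrained components converge only after this bridge has been trained on sufficient paired data.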
Year
2022
DOI
10.18653/v1/2022.iwslt-1.14
Venue
International Conference on Spoken Language Translation (IWSLT)
DocType
Conference
Volume
Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)
Citations
0
PageRank
0.34
References
0
Authors
7
Name | Order | Citations | PageRank
Ngoc-Quan Pham | 1 | 3 | 1.74
Tuan Nam Nguyen | 2 | 0 | 0.34
Thai-Binh Nguyen | 3 | 0 | 0.34
Danni Liu | 4 | 1 | 2.38
Carlos Mullov | 5 | 0 | 0.68
Jan Niehues | 6 | 259 | 39.48
Alex Waibel | 7 | 6343 | 1980.68