Title
Towards Unsupervised Speech-To-Text Translation
Abstract
We present a framework for building speech-to-text translation (ST) systems using only monolingual speech and text corpora, that is, speech utterances from a source language and independent text from a target language. Unlike traditional cascaded systems and end-to-end architectures, our system does not require any labeled data (i.e., transcribed source audio or parallel source and target text corpora) during training, making it especially applicable to language pairs with very few or even zero bilingual resources. The framework initializes the ST system with a cross-modal bilingual dictionary inferred from the monolingual corpora, which maps each source speech segment corresponding to a spoken word to its target text translation. For unseen source speech utterances, the system first performs word-by-word translation on each speech segment in the utterance. The translation is then refined by leveraging a language model and a sequence denoising autoencoder, which provide prior knowledge about the target language. Experimental results show that our unsupervised system achieves BLEU scores comparable to those of supervised end-to-end models despite the lack of supervision. We also provide an ablation analysis to examine the utility of each component in the system.
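As a rough illustration of the pipeline the abstract describes (not the authors' implementation), the Python sketch below embeds each source speech segment, performs word-by-word translation via nearest-neighbour lookup against a cross-modal dictionary of target-word embeddings, and uses a target-language bigram language model to choose among the top candidates. All names and values here (target_embeddings, bigram_logprob, the toy utterance) are hypothetical stand-ins; the paper's actual system induces the dictionary from Speech2Vec and word embeddings and additionally applies a sequence denoising autoencoder, which is omitted in this sketch.

import numpy as np

# Hypothetical cross-modal dictionary: target-language word -> embedding in a
# shared space with the source speech-segment embeddings.
target_embeddings = {
    "house": np.array([0.9, 0.1, 0.0]),
    "red":   np.array([0.1, 0.8, 0.2]),
    "the":   np.array([0.0, 0.2, 0.9]),
}

# Hypothetical target-language bigram log-probabilities used as the LM prior.
bigram_logprob = {
    ("<s>", "the"): -0.2, ("the", "red"): -0.5, ("red", "house"): -0.4,
    ("<s>", "red"): -2.0, ("the", "house"): -1.0, ("house", "red"): -2.5,
}

def nearest_words(segment_embedding, k=2):
    # Return the k target words whose embeddings are closest (cosine similarity)
    # to the given speech-segment embedding: the word-by-word translation step.
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    ranked = sorted(target_embeddings.items(),
                    key=lambda kv: -cos(segment_embedding, kv[1]))
    return [word for word, _ in ranked[:k]]

def translate(segment_embeddings):
    # Translate an utterance segment by segment, letting the bigram LM pick
    # among the dictionary's top candidates given the previously emitted word.
    prev, output = "<s>", []
    for emb in segment_embeddings:
        candidates = nearest_words(emb)
        best = max(candidates, key=lambda w: bigram_logprob.get((prev, w), -10.0))
        output.append(best)
        prev = best
    return output

# Toy utterance: three speech-segment embeddings (e.g. produced by Speech2Vec).
utterance = [np.array([0.1, 0.3, 0.8]),   # segment acoustically close to "the"
             np.array([0.2, 0.7, 0.1]),   # segment acoustically close to "red"
             np.array([0.8, 0.2, 0.1])]   # segment acoustically close to "house"
print(translate(utterance))               # ['the', 'red', 'house']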
Year
2018
Venue
2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Keywords
speech-to-text translation, unsupervised speech processing, speech2vec, bilingual lexicon induction
Field
Target text, Spoken word, Bilingual dictionary, Computer science, Utterance, Text corpus, Speech recognition, Denoising autoencoder, Labeled data, Language model
DocType
Journal
Volume
abs/1811.01307
ISSN
1520-6149
Citations
1
PageRank
0.35
References
0
Authors
4
Name             Order  Citations  PageRank
Yu-An Chung      1      53         8.47
Wei-Hung Weng    2      14         5.60
Schrasing Tong   3      4          1.41
James Glass      4      3123       413.63