Title: Towards Learning a Universal Non-Semantic Representation of Speech
Abstract: The ultimate goal of transfer learning is to reduce labeled data requirements by exploiting a pre-existing embedding model trained on different datasets or tasks. While significant progress has been made in the visual and language domains, the speech community has yet to identify a strategy with wide-reaching applicability across tasks. This paper describes a representation of speech based on an unsupervised triplet-loss objective, which exceeds state-of-the-art performance on a number of transfer learning tasks drawn from the non-semantic speech domain. The embedding is trained on a publicly available dataset, and it is tested on a variety of low-resource downstream tasks, including personalization tasks and tasks from the medical domain. The model will be publicly released.
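The core technique named in the abstract, an unsupervised triplet loss over audio segments, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the encoder architecture (`SimpleEncoder`), the margin value, the input shapes, and the anchor/positive/negative sampling scheme (nearby windows of the same clip vs. a window from a different clip) are all assumptions made for the sketch.

```python
# Hypothetical sketch of a triplet-loss objective for speech embeddings.
# None of the names below come from the paper; they are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleEncoder(nn.Module):
    """Toy stand-in for the real audio encoder (assumed architecture)."""
    def __init__(self, n_mels=64, embed_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_mels, 128, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # pool over the time axis
            nn.Flatten(),
            nn.Linear(128, embed_dim),
        )

    def forward(self, x):              # x: (batch, n_mels, frames)
        # Unit-norm embeddings keep pairwise distances bounded in [0, 2].
        return F.normalize(self.net(x), dim=-1)

def triplet_loss(anchor, positive, negative, margin=0.1):
    """Margin-based triplet loss: pull the positive closer than the
    negative by at least `margin` (margin value is an assumption)."""
    d_pos = (anchor - positive).pow(2).sum(-1)
    d_neg = (anchor - negative).pow(2).sum(-1)
    return F.relu(d_pos - d_neg + margin).mean()

# Usage: anchor and positive would be spectrogram windows drawn from
# nearby offsets of the same clip; the negative from a different clip.
# Random tensors stand in for real log-mel features here.
encoder = SimpleEncoder()
anchor_x, positive_x, negative_x = (torch.randn(8, 64, 96) for _ in range(3))
loss = triplet_loss(encoder(anchor_x), encoder(positive_x), encoder(negative_x))
loss.backward()
```

The self-supervision signal in this kind of setup comes entirely from how the triplets are sampled; no labels are required, which is what makes the resulting embedding reusable across downstream tasks.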
Year: 2020
DOI: 10.21437/Interspeech.2020-1242
Venue: INTERSPEECH
DocType: Conference
Citations: 8
PageRank: 0.50
References: 0
Authors: 10
Name                       Order  Citations  PageRank
Joel Shor                  1      55         5.47
Lorena Álvarez             2      504        36.47
Ronnie Maor                3      8          0.50
Oran Lang                  4      50         2.76
Felix de Chaumont Quitry   5      22         2.44
Marco Tagliasacchi         6      14         6.71
Omry Tuval                 7      8          0.50
Ira Shavitt                8      8          0.50
Dotan Emanuel              9      8          0.50
Yinnon Haviv               10     8          0.50