Title: The Zero Resource Speech Challenge 2020: Discovering discrete subword and word units

Abstract: We present the Zero Resource Speech Challenge 2020, which aims at learning speech representations from raw audio signals without any labels. It combines the data sets and metrics from two previous benchmarks (2017 and 2019) and features two tasks that tap into two levels of speech representation. The first task is to discover low bit-rate subword representations that optimize the quality of speech synthesis; the second is to discover word-like units from unsegmented raw speech. We present the results of the twenty submitted models and discuss the implications of the main findings for unsupervised speech learning.
Year: 2020
DOI: 10.21437/Interspeech.2020-2743
Venue: INTERSPEECH (Proceedings of Interspeech 2020)
DocType: Conference
Citations: 2
PageRank: 0.35
References: 0
Authors: 9
Author list (in order):
1. Ewan Dunbar
2. Julien Karadayi
3. Mathieu Bernard
4. Xuan-Nga Cao
5. Robin Algayres
6. Lucas Ondel
7. Laurent Besacier
8. Sakriani Sakti
9. Emmanuel Dupoux