Title
Speech Resynthesis from Discrete Disentangled Self-Supervised Representations
Abstract
We propose using self-supervised discrete representations for the task of speech resynthesis. To generate disentangled representations, we separately extract low-bitrate representations for speech content, prosodic information, and speaker identity. This allows us to synthesize speech in a controllable manner. We analyze various state-of-the-art, self-supervised representation learning methods and shed light on the advantages of each method while considering reconstruction quality and disentanglement properties. Specifically, we evaluate the F0 reconstruction, speaker identification performance (for both resynthesis and voice conversion), recordings' intelligibility, and overall quality using subjective human evaluation. Lastly, we demonstrate how these representations can be used for an ultra-lightweight speech codec. Using the obtained representations, we can reach a rate of 365 bits per second while providing better speech quality than the baseline methods. Audio samples can be found at the following link: \url{https://resynthesis-ssl.github.io/}.
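The 365 bps figure follows from the arithmetic of transmitting discrete units: each stream contributes its frame rate multiplied by log2 of its codebook size, while a single per-utterance speaker embedding amortizes to roughly zero bits per second. The Python sketch below illustrates this accounting; the frame rates and codebook sizes used here are illustrative assumptions, not the exact configuration reported in the paper.

```python
import math

# Minimal sketch of discrete-unit bitrate accounting.
# NOTE: the frame rates and codebook sizes below are assumed for
# illustration; they are not the paper's reported configuration.

def stream_bps(frames_per_second: float, codebook_size: int) -> float:
    """Bits per second for one discrete stream: frame rate x bits per symbol."""
    return frames_per_second * math.log2(codebook_size)

content_bps = stream_bps(50.0, 100)  # e.g. 100 content units at 50 Hz -> ~332 bps
f0_bps = stream_bps(6.25, 20)        # e.g. 20 quantized F0 codes at 6.25 Hz -> ~27 bps

# One speaker embedding per utterance contributes ~0 bps for long audio.
total_bps = content_bps + f0_bps
print(f"content: {content_bps:.1f} bps, F0: {f0_bps:.1f} bps, total: {total_bps:.1f} bps")
```

Under these assumed settings the total lands in the same low-hundreds-of-bps regime as the paper's reported 365 bps, which is what makes the codec "ultra-lightweight" relative to conventional codecs operating at several kbps.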
Year
2021
DOI
10.21437/Interspeech.2021-475
Venue
Interspeech
DocType
Conference
Citations
2
PageRank
0.35
References
0
Authors
8
Name                    Order  Citations  PageRank
Adam Polyak             1      50         6.09
Yossi Adi               2      87         9.18
Jade Copet              3      4          0.70
Eugene Kharitonov       4      6          3.20
Kushal Lakhotia         5      4          0.70
Wei-Ning Hsu            6      115        13.93
Abdelrahman Mohamed     7      15         1.70
Abdel-rahman Mohamed    8      3772       266.13