Title
Training Text-To-Speech Systems From Synthetic Data: A Practical Approach For Accent Transfer Tasks
Abstract
Transfer tasks in text-to-speech (TTS) synthesis - where one or more aspects of the speech of one set of speakers are transferred to another set of speakers who do not originally feature these aspects - remain challenging. One challenge is that models with high-quality transfer capabilities can suffer from stability issues, making them impractical for user-facing, critical tasks. This paper demonstrates that transfer can be achieved by training a robust TTS system on data generated by a less robust TTS system designed for a high-quality transfer task; in particular, a monolingual CHiVE-BERT TTS system is trained on the output of a Tacotron model designed for accent transfer. While some quality loss is inevitable with this approach, experimental results show that models trained on synthetic data in this way can produce high-quality audio exhibiting accent transfer, while preserving speaker characteristics such as speaking style.
Year
2022
DOI
10.21437/INTERSPEECH.2022-10115
Venue
Conference of the International Speech Communication Association (INTERSPEECH)
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
12
Name               Order  Citations  PageRank
Lev Finkelstein    1      522        37.10
Heiga Zen          2      1922       103.73
Norman Casagrande  3      67         4.50
Chun-an Chan       4      0          1.01
Ye Jia             5      0          1.69
Tom Kenter         6      0          1.01
Alexey Petelin     7      0          0.34
Jonathan Shen      8      25         3.58
Vincent Wan        9      0          1.01
Yu Zhang           10     442        41.79
Yonghui Wu         11     1065       72.78
Rob Clark          12     0          0.34