Title |
---|
A Study on the Efficacy of Model Pre-Training in Developing Neural Text-to-Speech System |
Abstract |
---|
In the development of neural text-to-speech (TTS) systems, pre-training the model with a large amount of non-target speakers' data is a common approach. However, the actual benefit of pre-training to the ultimately achieved performance for the target speaker(s) is uncertain and unstable, depending heavily on the quantity and text content of the training data. This study aims to better understand why and how model pre-training can positively contribute to TTS system performance. It is postulated that pre-training plays a critical role in learning text-related variation in speech, while further training with the target speaker's data captures the speaker-related variation. Test sets are created with varying degrees of similarity to the target speaker's data in terms of text content. Experiments show that leveraging a speaker-independent TTS model trained on speech data with diverse text content can improve the target-speaker TTS on domain-mismatched text. We also attempt to reduce the amount of pre-training data for a new text domain, improving data and computational efficiency. It is found that the TTS system can achieve comparable performance when the pre-training data is reduced to 1/8 of its original size. |
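The abstract describes a two-stage recipe: speaker-independent pre-training on pooled multi-speaker data with diverse text, followed by further training on the target speaker's data. Below is a minimal PyTorch sketch of that recipe; the model architecture, function names, hyperparameters, and synthetic data are all hypothetical stand-ins for illustration, not the authors' implementation.

```python
# Hedged sketch of pre-train-then-fine-tune for a TTS acoustic model.
# Everything here (model, shapes, data) is illustrative, assuming PyTorch.
import torch
import torch.nn as nn

class TinyAcousticModel(nn.Module):
    """Toy stand-in for a TTS acoustic model (phoneme ids -> mel frames)."""
    def __init__(self, vocab_size=64, hidden=128, n_mels=80):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, n_mels)

    def forward(self, phoneme_ids):
        x = self.embed(phoneme_ids)   # (B, T, hidden)
        x, _ = self.encoder(x)        # (B, T, hidden)
        return self.proj(x)           # (B, T, n_mels): one frame per token (toy)

def train(model, next_batch, lr, steps):
    """One training stage: pre-training and fine-tuning differ only in data and lr."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.L1Loss()
    for _ in range(steps):
        phonemes, mels = next_batch()
        opt.zero_grad()
        loss_fn(model(phonemes), mels).backward()
        opt.step()

# Synthetic stand-ins for the two corpora (random tensors, illustration only).
def multi_speaker_batch():   # large, text-diverse, non-target speakers
    return torch.randint(0, 64, (16, 40)), torch.randn(16, 40, 80)

def target_speaker_batch():  # small corpus from the single target speaker
    return torch.randint(0, 64, (4, 40)), torch.randn(4, 40, 80)

model = TinyAcousticModel()
# Stage 1: speaker-independent pre-training learns text-related variation.
train(model, multi_speaker_batch, lr=1e-3, steps=100)
# Stage 2: further training on target-speaker data captures speaker-related variation.
train(model, target_speaker_batch, lr=1e-4, steps=20)
```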
Year | DOI | Venue |
---|---|---|
2022 | 10.1109/ICASSP43922.2022.9746425 | IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) |
DocType | Citations | PageRank
---|---|---|
Conference | 0 | 0.34
References | Authors
---|---|
0 | 8
Name | Order | Citations | PageRank |
---|---|---|---|
Guangyan Zhang | 1 | 0 | 2.37 |
Yichong Leng | 2 | 0 | 2.37
Daxin Tan | 3 | 0 | 1.35 |
Ying Qin | 4 | 0 | 0.68 |
Kaitao Song | 5 | 7 | 4.26 |
Xu Tan | 6 | 88 | 23.94 |
Sheng Zhao | 7 | 10 | 3.04
Tan Lee | 8 | 0 | 0.68 |