Title
Parameter Generation Algorithms for Text-To-Speech Synthesis with Recurrent Neural Networks
Abstract
Recurrent Neural Networks (RNNs) have recently proven effective for acoustic modeling in TTS. Various techniques, such as the Maximum Likelihood Parameter Generation (MLPG) algorithm, have been naturally inherited from the HMM-based speech synthesis framework. This paper investigates in which situations parameter generation and variance restoration approaches help RNN-based TTS. We explore how their performance is affected by factors such as the choice of the loss function, the application of regularization methods, and the amount of training data. We propose an efficient way to compute MLPG using a convolutional kernel. Our results show that the L1 loss with proper regularization outperforms any system built with the conventional L2 loss and does not require applying MLPG (which is necessary otherwise). We did not observe perceptual improvements when embedding MLPG into the acoustic model. Finally, we show that variance restoration approaches are important for cepstral features but yield only minor perceptual gains for F0 prediction.
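The abstract mentions computing MLPG efficiently with a convolutional kernel, but the paper's exact formulation is not reproduced in this record. For reference, the sketch below shows conventional MLPG for a single feature dimension under standard assumptions (diagonal variances, first- and second-order delta windows); the function name mlpg_static_trajectory and the window coefficients are illustrative choices, not the authors' implementation.

import numpy as np

def mlpg_static_trajectory(means, variances):
    # means, variances: (T, 3) arrays holding the predicted static, delta
    # and delta-delta means/variances for one feature dimension.
    T = means.shape[0]
    delta_win = (-0.5, 0.0, 0.5)   # assumed first-order window
    accel_win = (1.0, -2.0, 1.0)   # assumed second-order window
    # Build the window matrix W (3T x T): static, delta, delta-delta rows.
    W = np.zeros((3 * T, T))
    for t in range(T):
        W[3 * t, t] = 1.0
        for k, w in zip((-1, 0, 1), delta_win):
            if 0 <= t + k < T:
                W[3 * t + 1, t + k] = w
        for k, w in zip((-1, 0, 1), accel_win):
            if 0 <= t + k < T:
                W[3 * t + 2, t + k] = w
    mu = means.reshape(-1)               # interleaved (static, delta, delta-delta)
    prec = 1.0 / variances.reshape(-1)   # diagonal precision
    WtP = W.T * prec                     # W^T * diag(precision)
    # Maximum-likelihood static trajectory: solve (W^T P W) c = W^T P mu.
    return np.linalg.solve(WtP @ W, WtP @ mu)

In practice the banded structure of W^T P W is exploited (e.g. with a banded Cholesky solve); the dense solve above is kept only for brevity, and the convolutional-kernel computation proposed in the paper is not shown here.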
Year
2018
DOI
10.1109/SLT.2018.8639626
Venue
SLT
Keywords
Acoustics, Hidden Markov models, Training, Trajectory, Covariance matrices, Kernel, Predictive models
Field
Computer science, Cepstrum, Recurrent neural network, Regularization (mathematics), Artificial intelligence, Kernel (linear algebra), Speech synthesis, Embedding, Pattern recognition, Algorithm, Speech recognition, Hidden Markov model, Acoustic model
DocType
Conference
ISSN
2639-5479
ISBN
978-1-5386-4334-1
Citations
0
PageRank
0.34
References
0
Authors
4
Name                 Order  Citations  PageRank
Viacheslav Klimkov   1      5          3.19
Alexis Moinet        2      103        13.48
Adam Nadolski        3      2          0.72
Thomas Drugman       4      526        41.79