Title
Sample Efficient Adaptive Text-to-Speech
Abstract
We present a meta-learning approach for adaptive text-to-speech (TTS) with little data. During training, we learn a multi-speaker model using a shared conditional WaveNet core and independent learned embeddings for each speaker. The aim of training is not to produce a neural network with fixed weights that is then deployed as a TTS system. Instead, the aim is to produce a network that requires little data at deployment time to rapidly adapt to new speakers. We introduce and benchmark three strategies: (i) learning the speaker embedding while keeping the WaveNet core fixed, (ii) fine-tuning the entire architecture with stochastic gradient descent, and (iii) predicting the speaker embedding with a trained neural network encoder. The experiments show that these approaches successfully adapt the multi-speaker neural network to new speakers, obtaining state-of-the-art results in both sample naturalness and voice similarity with merely a few minutes of audio data from each new speaker.
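The difference between adaptation strategies (i) and (ii) in the abstract can be sketched on a toy model. Everything below is a hypothetical stand-in: a small linear map `W` plays the role of the shared WaveNet core, `e` the new speaker's embedding, and a random vector the few-shot adaptation data. Strategy (iii), predicting the embedding with a trained encoder, is omitted for brevity.

```python
import numpy as np

# Hypothetical toy setup; not the paper's WaveNet model. It only
# illustrates which parameters move under each adaptation strategy.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))          # shared multi-speaker "core"
target = rng.normal(size=4)          # few-shot data for a new speaker

def loss(W, e):
    """Squared error of the core's output for speaker embedding e."""
    return 0.5 * np.sum((W @ e - target) ** 2)

# Strategy (i): learn only the new speaker's embedding; core stays fixed.
e = np.zeros(3)
for _ in range(500):
    e -= 0.05 * (W.T @ (W @ e - target))   # gradient step on e only

# Strategy (ii): fine-tune the whole model (core and embedding) with
# gradient descent, continuing from the embedding-only result.
W2, e2 = W.copy(), e.copy()
for _ in range(500):
    r = W2 @ e2 - target                   # current residual
    grad_W = np.outer(r, e2)               # d(loss)/d(W)
    grad_e = W2.T @ r                      # d(loss)/d(e)
    W2 -= 0.01 * grad_W
    e2 -= 0.01 * grad_e

print(loss(W, e), loss(W2, e2))
```

Fine-tuning the core fits the adaptation data at least as well as adapting the embedding alone, at the cost of touching many more parameters; the paper's comparison is between these regimes at realistic scale.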
Year: 2018
Venue: International Conference on Learning Representations
Field: Stochastic gradient descent, Architecture, Speech synthesis, Software deployment, Embedding, Naturalness, Speech recognition, Encoder, Artificial intelligence, Artificial neural network, Mathematics, Machine learning
DocType:
Volume: abs/1809.10460
Citations: 2
Journal:
PageRank: 0.41
References: 31
Authors: 14
Name                    Order  Citations  PageRank
Yutian Chen             1      680        36.28
Yannis M. Assael        2      129        6.51
Brendan Shillingford    3      14         2.73
David Budden            4      167        18.45
Scott Reed              5      1750       80.25
Heiga Zen               6      1922       103.73
Quan Wang               7      115        20.15
Luis C. Cobo            8      2          0.74
Andrew Trask            9      26         2.54
Ben Laurie              10     10         2.89
Çaglar Gülçehre         11     3010       133.22
Aäron Van Den Oord      12     1585       64.43
Oriol Vinyals           13     9419       418.45
Nando De Freitas        14     3284       273.68