Title
Deep Text-to-Speech System with Seq2Seq Model.
Abstract
Recent neural-network-based text-to-speech (speech synthesis) pipelines have employed recurrent Seq2seq architectures that can synthesize realistic-sounding speech directly from text characters. These systems, however, have complex architectures and take a substantial amount of time to train. We introduce several modifications to these Seq2seq architectures that allow for faster training while also reducing the complexity of the model architecture. We show that our proposed model achieves attention alignment much faster than previous architectures and that good audio quality can be achieved with a model that is much smaller in size. Sample audio is available at this https URL.
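For context, the abstract refers to character-level Seq2seq TTS models that learn an attention alignment between input characters and output spectrogram frames. The sketch below is a minimal, generic encoder-attention-decoder skeleton of that kind in PyTorch; it is not the paper's architecture, and all module names, layer sizes, and the choice of GRUs are illustrative assumptions.

```python
# Illustrative only: a generic character-level seq2seq-with-attention TTS skeleton,
# NOT the architecture proposed in the paper. Sizes and module choices are assumptions.
import torch
import torch.nn as nn

class Seq2SeqTTS(nn.Module):
    def __init__(self, n_chars=80, emb_dim=128, hid_dim=256, n_mels=80):
        super().__init__()
        self.embedding = nn.Embedding(n_chars, emb_dim)
        # Encoder turns the character sequence into a sequence of hidden states.
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True, bidirectional=True)
        # Additive-style attention scores each encoder state against the decoder state.
        self.attn = nn.Linear(hid_dim * 2 + hid_dim, 1)
        # Autoregressive decoder predicts one mel-spectrogram frame per step.
        self.decoder = nn.GRUCell(hid_dim * 2 + n_mels, hid_dim)
        self.frame_proj = nn.Linear(hid_dim, n_mels)
        self.hid_dim, self.n_mels = hid_dim, n_mels

    def forward(self, chars, n_frames):
        enc_out, _ = self.encoder(self.embedding(chars))        # (B, T_in, 2H)
        B = chars.size(0)
        dec_h = enc_out.new_zeros(B, self.hid_dim)
        prev_frame = enc_out.new_zeros(B, self.n_mels)
        frames, alignments = [], []
        for _ in range(n_frames):
            # Attention weights over the input characters for this decoder step.
            query = dec_h.unsqueeze(1).expand(-1, enc_out.size(1), -1)
            scores = self.attn(torch.cat([enc_out, query], dim=-1)).squeeze(-1)
            weights = torch.softmax(scores, dim=-1)              # (B, T_in)
            context = torch.bmm(weights.unsqueeze(1), enc_out).squeeze(1)
            dec_h = self.decoder(torch.cat([context, prev_frame], dim=-1), dec_h)
            prev_frame = self.frame_proj(dec_h)
            frames.append(prev_frame)
            alignments.append(weights)
        return torch.stack(frames, dim=1), torch.stack(alignments, dim=1)

model = Seq2SeqTTS()
chars = torch.randint(0, 80, (2, 30))      # batch of 2 character sequences
mel, align = model(chars, n_frames=100)    # mel: (2, 100, 80), align: (2, 100, 30)
```

The returned alignment tensor holds the per-step attention weights; in Seq2seq TTS work, a clean near-diagonal pattern in this matrix is the usual sign that attention alignment has been learned, which is the quantity the abstract reports converging faster.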
Year
2019
Venue
arXiv: Computation and Language
DocType
Journal
Volume
abs/1903.07398
Citations
0
PageRank
0.34
References
5
Authors
1
Name
Gary Wang
Order
1
Citations
9
PageRank
2.86