Title
Deep Voice 2: Multi-Speaker Neural Text-to-Speech.
Abstract
We introduce a technique for augmenting neural text-to-speech (TTS) with low-dimensional trainable speaker embeddings to generate different voices from a single model. As a starting point, we show improvements over the two state-of-the-art approaches for single-speaker neural TTS: Deep Voice 1 and Tacotron. We introduce Deep Voice 2, which is based on a pipeline similar to Deep Voice 1 but is constructed with higher-performance building blocks, and demonstrate a significant audio quality improvement over Deep Voice 1. We improve Tacotron by introducing a post-processing neural vocoder, and demonstrate a significant audio quality improvement. We then demonstrate our technique for multi-speaker speech synthesis for both Deep Voice 2 and Tacotron on two multi-speaker TTS datasets. We show that a single neural TTS system can learn hundreds of unique voices from less than half an hour of data per speaker, while achieving high-quality audio synthesis and preserving the speaker identities almost perfectly.
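To make the abstract's central technique concrete, the sketch below shows one way a low-dimensional trainable speaker embedding can condition a single TTS decoder so that one model produces different voices. It is a minimal illustration in PyTorch under assumed names and sizes (MultiSpeakerDecoder, a GRU decoder, speaker_dim=16, mel_dim=80), not the actual Deep Voice 2 or Tacotron architecture described in the paper.

# Minimal sketch (assumptions, not the paper's implementation) of conditioning
# a neural TTS decoder on a trainable low-dimensional speaker embedding.
import torch
import torch.nn as nn

class MultiSpeakerDecoder(nn.Module):
    def __init__(self, num_speakers, speaker_dim=16, text_dim=256,
                 hidden_dim=256, mel_dim=80):
        super().__init__()
        # One trainable low-dimensional embedding per speaker, learned jointly
        # with the rest of the model.
        self.speaker_embedding = nn.Embedding(num_speakers, speaker_dim)
        # Project the speaker embedding to an initial recurrent state so the
        # decoder's dynamics depend on speaker identity.
        self.init_state = nn.Linear(speaker_dim, hidden_dim)
        self.rnn = nn.GRU(text_dim + speaker_dim, hidden_dim, batch_first=True)
        self.to_mel = nn.Linear(hidden_dim, mel_dim)

    def forward(self, text_features, speaker_ids):
        # text_features: (batch, time, text_dim); speaker_ids: (batch,)
        spk = self.speaker_embedding(speaker_ids)            # (batch, speaker_dim)
        spk_seq = spk.unsqueeze(1).expand(-1, text_features.size(1), -1)
        h0 = torch.tanh(self.init_state(spk)).unsqueeze(0)   # (1, batch, hidden_dim)
        out, _ = self.rnn(torch.cat([text_features, spk_seq], dim=-1), h0)
        return self.to_mel(out)                               # (batch, time, mel_dim)

# Usage: 10 speakers, batch of 2, 50 encoder frames.
model = MultiSpeakerDecoder(num_speakers=10)
mel = model(torch.randn(2, 50, 256), torch.tensor([3, 7]))
print(mel.shape)  # torch.Size([2, 50, 80])

In this sketch the speaker embedding is injected both as an extra input at every decoder step and as the initial recurrent state; the paper's models choose their own injection sites, which differ between Deep Voice 2 and the Tacotron variant.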
Year
2017
Venue
Neural Information Processing Systems
Field
Deep voice, Voice analysis, Speech synthesis, Computer science, Speech recognition, Sound quality, Artificial intelligence, Natural language processing
DocType
Conference
Citations
21
PageRank
0.91
References
16
Authors
8
Name                       Order   Citations   PageRank
Andrew Gibiansky           1       99          5.61
Sercan Ömer Arik           2       131         14.47
Gregory Frederick Diamos   3       1117        51.07
John Miller                4       97          7.36
Kainan Peng                5       67          4.92
Wei Ping                   6       86          5.40
Jonathan Raiman            7       53          2.83
Yanqi Zhou                 8       118         6.94