Abstract |
---|
This paper presents an evaluation of deep spectral mapping and a WaveNet vocoder in voice conversion (VC). In our VC framework, the spectral features of an input speaker are first converted into those of a target speaker using deep spectral mapping; the converted waveform is then generated by the WaveNet vocoder from the converted spectral features together with the excitation features. In this work, we compare three different deep spectral mapping networks: a deep single density network (DSDN), a deep mixture density network (DMDN), and a long short-term memory recurrent neural network with an autoregressive output layer (LSTM-AR). Moreover, we investigate several methods for reducing the mismatch in the spectral features fed to the WaveNet vocoder between training and conversion, including methods that alleviate the oversmoothing of the converted spectral features and a method that refines the WaveNet model using the converted spectral features. The experimental results demonstrate that the LSTM-AR yields slightly better spectral mapping accuracy than the other networks, and that the proposed WaveNet refinement method significantly improves the naturalness of the converted waveform. |
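The framework summarized above is a two-stage pipeline: a spectral mapping model converts source-speaker spectral features into target-speaker features, and a neural vocoder then synthesizes the waveform from the converted spectra plus excitation features. The following minimal Python sketch illustrates only this two-stage data flow; the toy "models" and all function names are hypothetical placeholders, not the authors' DSDN/DMDN/LSTM-AR networks or their WaveNet implementation.

```python
# Hypothetical sketch of the two-stage VC pipeline from the abstract:
# stage 1 converts spectral features, stage 2 generates the waveform
# from the converted spectra and excitation features.
# The bodies below are toy stand-ins, not the paper's models.

def spectral_mapping(src_spectra):
    """Placeholder for a DSDN/DMDN/LSTM-AR mapping network:
    here, a fixed per-coefficient affine transform."""
    return [[2.0 * c + 0.5 for c in frame] for frame in src_spectra]

def wavenet_vocoder(spectra, excitation):
    """Placeholder for a WaveNet vocoder: emits one 'sample' per
    frame from its conditioning features."""
    return [sum(frame) + f0 for frame, f0 in zip(spectra, excitation)]

def convert(src_spectra, excitation):
    converted = spectral_mapping(src_spectra)      # stage 1: feature conversion
    return wavenet_vocoder(converted, excitation)  # stage 2: waveform generation

# Two toy frames of 2-dim spectra with per-frame excitation (e.g. F0).
waveform = convert([[0.1, 0.2], [0.3, 0.4]], [100.0, 110.0])
print(len(waveform))  # one output sample per input frame
```

The point of the sketch is the interface: the vocoder is conditioned on the *converted* features, which is why the paper studies training/conversion feature mismatch and WaveNet refinement on converted spectra.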
Year | DOI | Venue
---|---|---
2018 | 10.1109/SLT.2018.8639608 | SLT

Keywords | Field | DocType
---|---|---
Vocoders, Training, Feature extraction, Logic gates, Probability density function, Convolution, Trajectory | Mixture distribution, Autoregressive model, Logic gate, Pattern recognition, Computer science, Convolution, Waveform, Recurrent neural network, Feature extraction, Speech recognition, Artificial intelligence, Probability density function | Conference

ISSN | ISBN | Citations
---|---|---
2639-5479 | 978-1-5386-4334-1 | 1

PageRank | References | Authors
---|---|---
0.34 | 0 | 5

Name | Order | Citations | PageRank |
---|---|---|---|
Patrick Lumban Tobing | 1 | 15 | 7.89 |
Tomoki Hayashi | 2 | 96 | 18.49 |
Yi-Chiao Wu | 3 | 45 | 9.42 |
Kazuhiro Kobayashi | 4 | 66 | 9.91 |
Tomoki Toda | 5 | 1874 | 167.18 |