Abstract |
---|
In this paper, we present a reverberation removal approach for speaker verification, utilizing dual-label deep neural networks (DNNs). The networks perform feature mapping between the spectral features of reverberant and clean speech. Long short-term memory recurrent neural networks (LSTMs) are trained to map corrupted Mel filterbank (MFB) features to two sets of labels: i) the clean MFB features, and ii) either estimated pitch tracks or the fast Fourier transform (FFT) spectrogram of clean speech. The performance of reverberation removal is evaluated by equal error rates (EERs) of speaker verification experiments. |
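The dual-label training described in the abstract optimizes the network against two target sets at once (clean MFB features plus an auxiliary target such as pitch tracks or the FFT spectrogram). A minimal sketch of such a combined objective is a weighted sum of per-label mean-squared-error terms; the function name, the `alpha` weighting, and the equal default weight are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def dual_label_loss(pred_mfb, clean_mfb, pred_aux, aux_target, alpha=0.5):
    """Combined objective for a dual-label network (hypothetical weighting).

    pred_mfb / clean_mfb : predicted and target clean MFB features
    pred_aux / aux_target: predicted and target auxiliary labels
                           (e.g. pitch tracks or FFT spectrogram)
    alpha                : assumed interpolation weight between the two losses
    """
    loss_mfb = np.mean((pred_mfb - clean_mfb) ** 2)   # primary label set
    loss_aux = np.mean((pred_aux - aux_target) ** 2)  # auxiliary label set
    return alpha * loss_mfb + (1.0 - alpha) * loss_aux

# Toy usage: 2 frames of 40-dim MFB features and a 5-dim auxiliary target.
pred_mfb = np.zeros((2, 40))
clean_mfb = np.ones((2, 40))
pred_aux = np.zeros((2, 5))
aux_target = np.zeros((2, 5))
loss = dual_label_loss(pred_mfb, clean_mfb, pred_aux, aux_target)
```

With a perfect match on the auxiliary labels and a unit-squared error on the MFB features, the example above yields a loss of 0.5 under the assumed equal weighting.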
Year | Venue | Field
---|---|---
2018 | arXiv: Audio and Speech Processing | Speaker verification, Reverberation, Feature mapping, Spectrogram, Computer science, Filter bank, Recurrent neural network, Speech recognition, Fast Fourier transform, Deep neural networks

DocType | Volume | Citations
---|---|---
Journal | abs/1809.03868 | 0

PageRank | References | Authors
---|---|---
0.34 | 6 | 5
Name | Order | Citations | PageRank
---|---|---|---
Hao Zhang | 1 | 207 | 58.59 |
Stephen A. Zahorian | 2 | 59 | 12.93 |
Xiao Chen | 3 | 0 | 0.68 |
Peter Guzewich | 4 | 1 | 0.68 |
Xiaoyu Liu | 5 | 72 | 17.33 |