Abstract |
---|
In this paper, we propose an efficient end-to-end neural network that estimates near-end speech with U-convolution blocks, exploiting the various signals available in an acoustic echo cancellation (AEC) system to achieve residual echo suppression (RES). Specifically, the proposed model employs multiple encoders and an integration block to utilize the complete signal information in the AEC system, and applies U-convolution blocks to separate the near-end speech efficiently. The proposed network improves the perceptual evaluation of speech quality (PESQ) and the short-time objective intelligibility (STOI) over the baselines in scenarios involving smart audio devices. The experimental results show that the proposed method outperforms the baselines under various types of mismatched background noise and environmental reverberation, while requiring low computational resources. |
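
The abstract describes feeding multiple signals from the AEC system (e.g. the far-end reference, the microphone capture, and the linear AEC output) into a multi-encoder RES model. Below is a minimal NumPy sketch of that signal flow; the signal names, the toy linear echo path, and the concatenation-style "integration" of inputs are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16000                                  # 1 s of audio at 16 kHz

far_end = rng.standard_normal(n)           # loudspeaker (reference) signal
near_end = 0.5 * rng.standard_normal(n)    # target near-end speech
echo = 0.7 * far_end                       # toy linear echo path (assumption)
mic = near_end + echo                      # microphone capture

# Linear AEC stage: a deliberately imperfect scaled subtraction stands in
# for an adaptive filter, so residual echo remains after cancellation.
echo_est = 0.6 * far_end
aec_out = mic - echo_est
residual = aec_out - near_end              # residual echo the RES must remove

# "Multiple encoders": stack all available signals as parallel input
# channels that a neural RES model would consume.
features = np.stack([far_end, mic, aec_out], axis=0)

print(features.shape)                      # (3, 16000)
```

The residual after the linear stage has far less energy than the raw echo, which is exactly the regime where a learned suppressor operating on all three signals can recover the near-end speech.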

Year | DOI | Venue
---|---|---
2021 | 10.1109/ICASSP39728.2021.9414753 | 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2021)

Keywords | DocType | Citations
---|---|---
Acoustic Echo Cancellation, Residual Echo Suppression, End-To-End Neural Network | Conference | 0

PageRank | References | Authors
---|---|---
0.34 | 0 | 3

Name | Order | Citations | PageRank
---|---|---|---
Eesung Kim | 1 | 1 | 1.73 |
Jae-Jin Jeon | 2 | 0 | 0.68 |
Hyeji Seo | 3 | 0 | 0.34 |