Title
Conditional Generative Adversarial Networks for Acoustic Echo Cancellation
Abstract
This paper presents an acoustic echo canceller based on a conditional Generative Adversarial Network (cGAN) for single-talk and double-talk scenarios. cGANs have become a popular research topic in audio processing because of their ability to reproduce the fine details of audio signals. However, to the best of our knowledge, no previous work has used cGANs for Acoustic Echo Cancellation (AEC) in an end-to-end manner. The generator of the proposed cGAN framework is a U-Net model that synthesises the echo-free signal. The synthesised signal, conditioned on the estimated echo signal, is the input to the discriminator. The discriminator drives the generator to refine the synthesised signals so that they are as realistic as possible. Experiments have been carried out in which the proposed cGAN is compared to a GAN and to a U-Net model in terms of echo return loss enhancement (ERLE) and perceptual evaluation of speech quality (PESQ) score for different values of signal-to-echo ratio (SER). The cGAN outperforms the other two models in both ERLE and PESQ, and achieves PESQ scores comparable to those of high-ranked echo cancellers from the AEC Challenge 2021.
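The abstract describes a U-Net generator that synthesises the echo-free signal and a discriminator conditioned on the estimated echo. The sketch below is a minimal, hypothetical PyTorch illustration of that setup; the paper's actual U-Net depth, feature domain (waveform vs. spectrogram), loss weighting, and all names used here (SimpleUNetGenerator, EchoConditionedDiscriminator) are assumptions for illustration only, not the authors' implementation.

```python
# Hypothetical sketch of a cGAN for AEC, loosely following the abstract.
# All layer sizes and the least-squares adversarial loss are assumptions.
import torch
import torch.nn as nn

class SimpleUNetGenerator(nn.Module):
    """Toy encoder-decoder with a skip connection, standing in for the U-Net
    that synthesises the echo-free signal from the microphone input."""
    def __init__(self, channels=1):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv1d(channels, 16, 15, stride=2, padding=7), nn.ReLU())
        self.dec = nn.ConvTranspose1d(16, channels, 15, stride=2, padding=7, output_padding=1)

    def forward(self, mic):
        h = self.enc(mic)
        return self.dec(h) + mic  # skip connection: predict a residual over the input

class EchoConditionedDiscriminator(nn.Module):
    """Scores whether a (signal, estimated-echo) pair looks like a real
    echo-free signal or a generator output, per the conditioning in the abstract."""
    def __init__(self, channels=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, 16, 15, stride=4, padding=7), nn.LeakyReLU(0.2),
            nn.Conv1d(16, 1, 15, stride=4, padding=7),
        )

    def forward(self, signal, echo_estimate):
        x = torch.cat([signal, echo_estimate], dim=1)  # condition on the echo estimate
        return self.net(x).mean(dim=(1, 2))            # one realism score per example

# One illustrative adversarial step with dummy signals:
G, D = SimpleUNetGenerator(), EchoConditionedDiscriminator()
mic = torch.randn(4, 1, 16000)        # microphone signal (near end + echo)
echo_est = torch.randn(4, 1, 16000)   # estimated echo used as the condition
clean = torch.randn(4, 1, 16000)      # ground-truth echo-free signal

fake = G(mic)
d_loss = ((D(clean, echo_est) - 1) ** 2).mean() + (D(fake.detach(), echo_est) ** 2).mean()
g_loss = ((D(fake, echo_est) - 1) ** 2).mean() + nn.functional.l1_loss(fake, clean)
```

The L1 term added to the generator loss is a common stabiliser in cGAN-based speech enhancement; whether the paper uses such a reconstruction term is not stated in the abstract.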
Year
2022
Venue
2022 30th European Signal Processing Conference (EUSIPCO)
Keywords
Acoustic echo cancellation, deep learning, U-Net, conditional GANs
DocType
Conference
ISSN
2219-5491
ISBN
978-1-6654-6799-5
Citations
0
PageRank
0.34
References
9
Authors
6