Title
Unsupervised Training of Neural Mask-Based Beamforming.
Abstract
We present an unsupervised training approach for a neural network-based mask estimator in an acoustic beamforming application. The network is trained to maximize a likelihood criterion derived from a spatial mixture model of the observations. It is trained from scratch without requiring any parallel data consisting of degraded input and clean training targets. Thus, training can be carried out on real recordings of noisy speech rather than simulated ones. In contrast to previous work on unsupervised training of neural mask estimators, our approach avoids the need for a possibly pre-trained teacher model entirely. We demonstrate the effectiveness of our approach by speech recognition experiments on two different datasets: one mainly deteriorated by noise (CHiME 4) and one by reverberation (REVERB). The results show that the performance of the proposed system is on par with a supervised system using oracle target masks for training and with a system trained using a model-based teacher.
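The abstract describes a neural mask estimator feeding an acoustic beamformer. The paper itself includes no code; the following is a minimal NumPy sketch of the standard mask-based MVDR beamforming step (in the Souden-style formulation) that such mask estimators typically drive. The function names, array layout `(frequency, channel, time)`, and the diagonal-loading constant are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def spatial_covariance(obs, mask):
    """Mask-weighted spatial covariance.

    obs:  complex STFT, shape (F, D, T) = (freq bins, channels, frames)
    mask: time-frequency mask, shape (F, T)
    returns: covariance matrices, shape (F, D, D)
    """
    psd = np.einsum('ft,fdt,fet->fde', mask, obs, obs.conj())
    norm = np.maximum(mask.sum(axis=-1), 1e-10)  # avoid division by zero
    return psd / norm[:, None, None]

def mask_based_mvdr(obs, speech_mask, noise_mask, ref_channel=0):
    """MVDR beamformer driven by speech/noise masks (illustrative sketch)."""
    _, D, _ = obs.shape
    phi_s = spatial_covariance(obs, speech_mask)
    phi_n = spatial_covariance(obs, noise_mask)
    # Small diagonal loading keeps the noise covariance invertible.
    phi_n = phi_n + 1e-6 * np.eye(D)[None]
    num = np.linalg.solve(phi_n, phi_s)            # (F, D, D): Phi_n^{-1} Phi_s
    trace = np.trace(num, axis1=-2, axis2=-1)      # (F,)
    w = num[..., ref_channel] / trace[:, None]     # (F, D) beamforming weights
    return np.einsum('fd,fdt->ft', w.conj(), obs)  # enhanced STFT, (F, T)
```

In the supervised baseline the masks come from oracle targets; in the proposed system the network producing them is trained by maximizing the likelihood of a spatial mixture model, so no clean targets are needed.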
Year: 2019
DOI: 10.21437/Interspeech.2019-2549
Venue: INTERSPEECH
DocType: Conference
Citations: 2
PageRank: 0.43
References: 0
Authors: 3
Name                  Order  Citations  PageRank
Lukas Drude           1      951        1.10
Jahn Heymann          2      1021       0.29
Reinhold Haeb-Umbach  3      14872      11.71