Title
End-to-end training of time domain audio separation and recognition
Abstract
The rising interest in single-channel multi-speaker speech separation has sparked the development of End-to-End (E2E) approaches to multi-speaker speech recognition. However, state-of-the-art neural network-based time domain source separation has so far not been combined with E2E speech recognition. Here we demonstrate how to combine a separation module based on a Convolutional Time domain Audio Separation Network (Conv-TasNet) with an E2E speech recognizer, and how to train such a model jointly, either by distributing it over multiple GPUs or by approximating truncated back-propagation for the convolutional front-end. To put this work into perspective and illustrate the complexity of the design space, we provide a compact overview of single-channel multi-speaker recognition systems. Our experiments show a word error rate of 11.0% on WSJ0-2mix and indicate that our joint time domain model can yield substantial improvements over the cascaded DNN-HMM and monolithic E2E frequency domain systems proposed so far.
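To illustrate the joint training idea described in the abstract, the following is a minimal PyTorch sketch, not the authors' implementation: a Conv-TasNet-style encoder/mask/decoder separator feeds its estimated waveforms into a small recognizer, and a permutation-invariant CTC loss back-propagates through both modules. The class names (TinySeparator, TinyRecognizer), the pit_ctc helper, and all hyperparameters are illustrative assumptions; the paper's actual system uses a full Conv-TasNet with an attention-based E2E recognizer and tackles the resulting memory footprint via multi-GPU distribution or approximated truncated back-propagation.

```python
# Minimal sketch of joint time domain separation + recognition training.
# NOT the authors' implementation: module sizes, names, and the CTC-based
# recognizer are illustrative assumptions.
import itertools

import torch
import torch.nn as nn


class TinySeparator(nn.Module):
    """Conv-TasNet-style separator: learned encoder -> masks -> learned decoder."""

    def __init__(self, n_src=2, n_filters=64, kernel=16, stride=8):
        super().__init__()
        self.n_src = n_src
        self.encoder = nn.Conv1d(1, n_filters, kernel, stride=stride)
        self.masker = nn.Sequential(
            nn.Conv1d(n_filters, n_filters * n_src, 1), nn.Sigmoid())
        self.decoder = nn.ConvTranspose1d(n_filters, 1, kernel, stride=stride)

    def forward(self, mix):                                 # mix: (B, T)
        feats = torch.relu(self.encoder(mix.unsqueeze(1)))  # (B, F, T')
        masks = self.masker(feats).view(
            mix.size(0), self.n_src, -1, feats.size(-1))    # (B, n_src, F, T')
        return [self.decoder(feats * masks[:, s]).squeeze(1)
                for s in range(self.n_src)]                 # n_src x (B, T)


class TinyRecognizer(nn.Module):
    """Stand-in for an E2E recognizer; emits per-frame log-probs for CTC."""

    def __init__(self, vocab=30, n_feat=64):
        super().__init__()
        self.frontend = nn.Conv1d(1, n_feat, 16, stride=8)
        self.out = nn.Linear(n_feat, vocab)

    def forward(self, wav):                                 # wav: (B, T)
        h = torch.relu(self.frontend(wav.unsqueeze(1)))     # (B, F, T')
        return self.out(h.transpose(1, 2)).log_softmax(-1)  # (B, T', V)


def pit_ctc(ctc, outs, tgts, tgt_lens):
    """Permutation-invariant training: keep the best source/target assignment."""
    best = None
    for perm in itertools.permutations(range(len(outs))):
        loss = sum(
            ctc(outs[s].transpose(0, 1),                    # (T', B, V)
                tgts[t],
                torch.full((outs[s].size(0),), outs[s].size(1), dtype=torch.long),
                tgt_lens[t])
            for t, s in enumerate(perm))
        best = loss if best is None else torch.minimum(best, loss)
    return best


sep, asr = TinySeparator(), TinyRecognizer()
ctc = nn.CTCLoss(blank=0, zero_infinity=True)
opt = torch.optim.Adam(list(sep.parameters()) + list(asr.parameters()), lr=1e-3)

mix = torch.randn(2, 8000)                        # toy batch: 2 half-second mixtures
tgts = [torch.randint(1, 30, (2, 12)) for _ in range(2)]   # dummy label sequences
tgt_lens = [torch.full((2,), 12, dtype=torch.long) for _ in range(2)]

est = sep(mix)                                    # separated waveforms
outs = [asr(e) for e in est]                      # per-source log-probs
loss = pit_ctc(ctc, outs, tgts, tgt_lens)         # one joint E2E loss
loss.backward()                                   # gradients reach the separator
opt.step()
```

Taking the permutation minimum at the batch level, as above, is a simplification; per-utterance PIT would compute losses with reduction='none' and take an element-wise minimum before averaging.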
Year
2020
DOI
10.1109/ICASSP40776.2020.9053461
Venue
ICASSP
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
7
Name                   Order  Citations  PageRank
Thilo von Neumann      1      6          2.57
Keisuke Kinoshita      2      494        54.81
Lukas Drude            3      95         11.10
Christoph Boeddeker    4      3          3.84
Marc Delcroix          5      699        62.07
Tomohiro Nakatani      6      1327       139.18
Reinhold Haeb-Umbach   7      1487       211.71