Title
Dry, Focus, and Transcribe: End-to-End Integration of Dereverberation, Beamforming, and ASR.
Abstract
Sequence-to-sequence (S2S) modeling is becoming a popular paradigm for automatic speech recognition (ASR) because of its ability to jointly optimize all the conventional ASR components in an end-to-end (E2E) fashion. This paper extends E2E ASR from standard close-talk to far-field applications by encompassing the entire multichannel speech enhancement and ASR components within the S2S model. There have been previous studies on jointly optimizing neural beamforming alongside E2E ASR for denoising. However, it is clear from both recent challenge outcomes and successful products that far-field systems would be incomplete without addressing denoising and dereverberation simultaneously. This paper proposes a novel architecture for far-field ASR that composes neural extensions of the dereverberation and beamforming modules with the S2S ASR module as a single differentiable neural network, while clearly defining the role of each subnetwork. To our knowledge, this is the first successful demonstration of such a system, which we term DFTnet (dry, focus, and transcribe). It achieves better performance than conventional pipeline methods on the DIRHA English dataset and comparable performance on the REVERB dataset. It also has the additional advantages of being non-iterative and of not requiring parallel noisy and clean speech data.
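A minimal PyTorch sketch (not the authors' implementation) of the composition described above: three subnetworks chained so that gradients from the ASR loss flow back through beamforming and dereverberation. The DereverbNet, BeamformNet, and S2SASRNet classes below are hypothetical placeholders standing in for the paper's neural dereverberation, neural beamforming, and attention-based S2S ASR components.

import torch
import torch.nn as nn


class DereverbNet(nn.Module):
    """Placeholder 'dry' stage: maps a multichannel feature tensor
    (batch, channels, frames, freq) to a dereverberated version."""
    def __init__(self, freq_bins):
        super().__init__()
        self.filt = nn.Linear(freq_bins, freq_bins)

    def forward(self, x):
        return self.filt(x)


class BeamformNet(nn.Module):
    """Placeholder 'focus' stage: collapses the channel dimension
    with learned per-channel weights (a stand-in for neural beamforming)."""
    def __init__(self, channels):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(channels) / channels)

    def forward(self, x):
        # (batch, channels, frames, freq) -> (batch, frames, freq)
        return torch.einsum('c,bctf->btf', self.weights, x)


class S2SASRNet(nn.Module):
    """Placeholder 'transcribe' stage: a recurrent encoder with a linear
    output layer standing in for an attention-based S2S ASR model."""
    def __init__(self, freq_bins, vocab):
        super().__init__()
        self.enc = nn.GRU(freq_bins, 128, batch_first=True)
        self.out = nn.Linear(128, vocab)

    def forward(self, x):
        h, _ = self.enc(x)
        return self.out(h)


class DFTnet(nn.Module):
    """Dry, Focus, and Transcribe composed as one differentiable network."""
    def __init__(self, channels=8, freq_bins=257, vocab=30):
        super().__init__()
        self.dry = DereverbNet(freq_bins)
        self.focus = BeamformNet(channels)
        self.transcribe = S2SASRNet(freq_bins, vocab)

    def forward(self, x):
        return self.transcribe(self.focus(self.dry(x)))


# Usage: a random multichannel feature batch (batch=2, channels=8, frames=100, freq=257).
net = DFTnet()
logits = net(torch.randn(2, 8, 100, 257))  # -> (2, 100, 30)

Because the whole chain is a single computation graph, a single ASR objective on the output logits would update all three stages jointly, which is the key property the abstract highlights.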
Year
2019
Venue
arXiv: Audio and Speech Processing
DocType
Journal
Volume
abs/1904.09049
Citations
0
PageRank
0.34
References
0
Authors
6
Name                 Order  Citations  PageRank
S. Aswin Shanmugam   1      7          4.21
Xiaofei Wang         2      10         7.42
Shinji Watanabe      3      0          0.68
Toru Taniguchi       4      14         2.93
Dung T. Tran         5      1          1.36
Yuya Fujita          6      0          1.01