Title
Dynamic Layer Normalization For Adaptive Neural Acoustic Modeling In Speech Recognition
Abstract
Layer normalization is a recently introduced technique for normalizing the activities of neurons in deep neural networks to improve training speed and stability. In this paper, we introduce a new layer normalization technique called Dynamic Layer Normalization (DLN) for adaptive neural acoustic modeling in speech recognition. By dynamically generating the scaling and shifting parameters of layer normalization, DLN adapts neural acoustic models to the acoustic variability arising from various factors such as speakers, channel noise, and environments. Unlike other adaptive acoustic models, our proposed approach does not require additional adaptation data or speaker information such as i-vectors. Moreover, the model size is fixed because the adaptation parameters are generated dynamically. We apply DLN to deep bidirectional LSTM acoustic models and evaluate them on two benchmark datasets for large-vocabulary ASR experiments: WSJ and TED-LIUM release 2. The experimental results show that DLN improves the transcription accuracy of neural acoustic models by dynamically adapting to various speakers and environments.
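The abstract describes DLN as layer normalization whose scaling (gain) and shifting (bias) parameters are generated dynamically from the input rather than learned as fixed vectors. Below is a minimal sketch of that idea in PyTorch; the module name DynamicLayerNorm, the mean-pooled summary vector, and the linear generator networks are illustrative assumptions, not the authors' exact formulation.

import torch
import torch.nn as nn

class DynamicLayerNorm(nn.Module):
    """Layer normalization whose gain and bias are predicted from a
    summary of the input (sketch; not the paper's exact parameterization)."""

    def __init__(self, hidden_size, summary_size=None, eps=1e-5):
        super().__init__()
        summary_size = summary_size or hidden_size
        self.eps = eps
        # Small networks mapping an utterance-level summary to the
        # per-feature scaling (gain) and shifting (bias) parameters.
        self.to_gain = nn.Linear(summary_size, hidden_size)
        self.to_bias = nn.Linear(summary_size, hidden_size)

    def forward(self, x, summary):
        # x: (batch, time, hidden) activations to normalize
        # summary: (batch, summary_size) summary of the utterance
        gain = self.to_gain(summary).unsqueeze(1)   # (batch, 1, hidden)
        bias = self.to_bias(summary).unsqueeze(1)   # (batch, 1, hidden)
        mean = x.mean(dim=-1, keepdim=True)
        var = x.var(dim=-1, unbiased=False, keepdim=True)
        x_hat = (x - mean) / torch.sqrt(var + self.eps)
        return gain * x_hat + bias

# Usage sketch: summarize the utterance by mean-pooling over time
# (the summarization scheme is an assumption for illustration).
x = torch.randn(8, 100, 320)           # (batch, time, hidden)
summary = x.mean(dim=1)                # (batch, hidden)
dln = DynamicLayerNorm(hidden_size=320)
y = dln(x, summary)                    # same shape as x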
Year
2017
DOI
10.21437/Interspeech.2017-556
Venue
18TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2017), VOLS 1-6: SITUATED INTERACTION
Keywords
speech recognition, adaptive acoustic model, dynamic layer normalization
DocType
Conference
Volume
abs/1707.06065
ISSN
2308-457X
Citations
4
PageRank
0.48
References
15
Authors
3
Name            Order  Citations  PageRank
Taesup Kim      1      57         3.23
Inchul Song     2      55         5.72
Yoshua Bengio   3      42677      3039.83