Title
Joint unsupervised adaptation of n-gram and RNN language models via LDA-based hybrid mixture modeling.
Abstract
This paper reports an initial study of unsupervised adaptation that assumes the simultaneous use of both n-gram and recurrent neural network (RNN) language models (LMs) in automatic speech recognition (ASR). Combining n-gram and RNN LMs is known to be more effective for ASR than using either of them alone. However, while various unsupervised adaptation methods specific to either n-gram LMs or RNN LMs have been examined, no method that simultaneously adapts both has been presented. To handle the different LMs in a unified unsupervised adaptation framework, our key idea is to introduce mixture modeling for both n-gram LMs and RNN LMs. Mixture modeling can handle multiple LMs simultaneously, and unsupervised adaptation is accomplished simply by adjusting the mixture weights using a recognition hypothesis of the input speech. This paper proposes joint unsupervised adaptation achieved by hybrid mixture modeling that uses both n-gram mixture models and RNN mixture models, and presents latent Dirichlet allocation (LDA) based hybrid mixture modeling for effective topic adaptation. Experiments on lecture ASR tasks show the effectiveness of joint unsupervised adaptation. We also report performance when only the n-gram LMs or only the RNN LMs are adapted.
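The abstract describes its core mechanism, adjusting mixture weights from a recognition hypothesis, only at a high level. A standard way to realize such weight adaptation is EM re-estimation of linear-interpolation weights over the component LMs; the sketch below assumes that approach, and all names in it (adapt_mixture_weights, component_lms, the toy unigram components) are hypothetical illustrations, not the authors' implementation.

```python
def adapt_mixture_weights(component_lms, hypothesis, iters=10):
    """EM re-estimation of linear-interpolation weights for a set of
    component LMs, using a (possibly errorful) recognition hypothesis
    as adaptation data. Each element of `component_lms` is a callable
    returning P(word | history) under that component model."""
    k = len(component_lms)
    weights = [1.0 / k] * k  # start from a uniform mixture
    for _ in range(iters):
        # E-step: accumulate each component's posterior responsibility
        # for every token in the hypothesis
        counts = [0.0] * k
        for i, word in enumerate(hypothesis):
            history = hypothesis[:i]
            probs = [w * lm(word, history)
                     for w, lm in zip(weights, component_lms)]
            total = sum(probs) or 1e-12  # guard against all-zero probabilities
            for j in range(k):
                counts[j] += probs[j] / total
        # M-step: new weights proportional to accumulated responsibilities
        norm = sum(counts) or 1e-12
        weights = [c / norm for c in counts]
    return weights


# Toy usage with two hypothetical unigram components standing in for
# topic-specific LMs (an n-gram LM and an RNN LM would plug in the same way).
lm_a = lambda w, h: {"deep": 0.6, "learning": 0.4}.get(w, 1e-6)
lm_b = lambda w, h: {"speech": 0.7, "deep": 0.3}.get(w, 1e-6)
print(adapt_mixture_weights([lm_a, lm_b], ["deep", "speech", "learning"]))
```

Because the update touches only the interpolation weights, the same loop applies whether the components are n-gram LMs or RNN LMs, which is the property a unified hybrid mixture framework of this kind relies on.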
Year
2017
Venue
Asia-Pacific Signal and Information Processing Association Annual Summit and Conference
Field
Latent Dirichlet allocation, Computer science, Interpolation, Recurrent neural network, Speech recognition, n-gram, Decoding methods, Hidden Markov model, Language model, Mixture model
DocType
Conference
ISSN
2309-9402
Citations
0
PageRank
0.34
References
0
Authors
4
Name               Order  Citations  PageRank
Ryo Masumura       1      25         28.24
Taichi Asami       2      22         10.49
Hirokazu Masataki  3      18         9.21
Yushi Aono         4      7          11.02