Title
Improving N-gram language modeling for code-switching speech recognition.
Abstract
Code-switching language modeling is challenging because the statistics of each individual language, as well as the cross-lingual statistics, are insufficient. To compensate for this statistical insufficiency, in this paper we propose a word-class n-gram language modeling approach in which only infrequent words are clustered, while the most frequent words are treated as singleton classes. We first demonstrate the effectiveness of the proposed method on our English-Mandarin code-switching SEAME data in terms of perplexity. Compared with conventional word n-gram language models, as well as word-class n-gram language models in which the entire vocabulary is clustered, the proposed word-class n-gram language modeling approach yields lower perplexity on our SEAME dev sets. Additionally, we observed further perplexity reduction by interpolating the word n-gram language models with the proposed word-class n-gram language models. We also built word-class n-gram language models from third-party text data with the proposed method, and a similar perplexity improvement was obtained on our SEAME dev sets when these models were interpolated with the word n-gram language models. Finally, to examine the contribution of the proposed language modeling approach to code-switching speech recognition, we conducted lattice-based n-best rescoring.
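The abstract describes two ingredients: a partial word-class n-gram LM (only infrequent words are clustered, frequent words remain singleton classes) and its linear interpolation with a plain word n-gram LM. The following is a minimal illustrative sketch of those two ideas only, not the paper's implementation: the toy corpus, frequency threshold, number of classes, round-robin clustering of rare words, add-one smoothing, and interpolation weight are all assumptions made for illustration; the actual experiments use SEAME data and proper clustering and rescoring tooling.

# Sketch (assumed, not from the paper): partial word-class bigram LM interpolated
# with a word bigram LM. All hyperparameters and the toy corpus are illustrative.
from collections import Counter, defaultdict
import math

corpus = [
    "i want to 吃饭 now".split(),
    "i want to go home now".split(),
    "we go to 学校 today".split(),
]

MIN_FREQ = 2      # assumed threshold: rarer words get clustered
NUM_CLUSTERS = 2  # assumed number of classes for infrequent words
LAMBDA = 0.6      # assumed interpolation weight for the word LM

# Word-to-class map: frequent words are their own (singleton) class,
# infrequent words are assigned to classes (round-robin stand-in for real clustering).
freq = Counter(w for sent in corpus for w in sent)
rare = sorted(w for w, c in freq.items() if c < MIN_FREQ)
word2class = {w: w for w, c in freq.items() if c >= MIN_FREQ}
word2class.update({w: f"CLASS_{i % NUM_CLUSTERS}" for i, w in enumerate(rare)})

def bigram_counts(sents):
    # Collect unigram and bigram counts with sentence boundary tokens.
    uni, bi = Counter(), Counter()
    for s in sents:
        toks = ["<s>"] + s + ["</s>"]
        uni.update(toks)
        bi.update(zip(toks, toks[1:]))
    return uni, bi

w_uni, w_bi = bigram_counts(corpus)                       # word bigram LM counts
class_corpus = [[word2class[w] for w in s] for s in corpus]
c_uni, c_bi = bigram_counts(class_corpus)                 # class bigram LM counts

# Emission counts P(word | class) from the same corpus.
class_members = defaultdict(Counter)
for s in corpus:
    for w in s:
        class_members[word2class[w]][w] += 1

def p_bigram(h, w, uni, bi, vocab_size):
    # Add-one smoothed bigram probability.
    return (bi[(h, w)] + 1) / (uni[h] + vocab_size)

V_w, V_c = len(w_uni), len(c_uni)

def p_class_lm(h, w):
    # Class-based estimate: P(w | h) ~ P(class(w) | class(h)) * P(w | class(w)).
    ch, cw = word2class.get(h, h), word2class[w]
    emit = class_members[cw][w] / sum(class_members[cw].values())
    return p_bigram(ch, cw, c_uni, c_bi, V_c) * emit

def interpolated_logprob(sent):
    # Linear interpolation of the word LM and the word-class LM.
    toks = ["<s>"] + sent + ["</s>"]
    lp = 0.0
    for h, w in zip(toks, toks[1:]):
        pw = p_bigram(h, w, w_uni, w_bi, V_w)
        pc = p_class_lm(h, w) if w in word2class else pw
        lp += math.log(LAMBDA * pw + (1.0 - LAMBDA) * pc)
    return lp

test = "i want to go home now".split()
ppl = math.exp(-interpolated_logprob(test) / (len(test) + 1))
print(f"interpolated bigram perplexity: {ppl:.2f}")

In the paper's setting, the clustering would come from a proper word-clustering algorithm applied only to infrequent words, and the interpolated LM would then be used for lattice-based n-best rescoring; the round-robin assignment above is merely a placeholder to keep the sketch self-contained.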
Year
2017
Venue
Asia-Pacific Signal and Information Processing Association Annual Summit and Conference
Field
Perplexity, Data modeling, Data set, Code-switching, Computer science, Speech recognition, n-gram, Vocabulary, Language model, Performance improvement
DocType
Conference
ISSN
2309-9402
Citations
0
PageRank
0.34
References
0
Authors
5
Name | Order | Citations | PageRank
Zhiping Zeng | 1 | 1 | 3.06
Haihua Xu | 2 | 55 | 11.41
Tze Yuang Chong | 3 | 9 | 3.59
Eng Siong Chng | 4 | 970 | 106.33
Haizhou Li | 5 | 3678 | 334.61