Title
Refine bigram PLSA model by assigning latent topics unevenly.
Abstract
As an important component of many speech and language processing applications, statistical language models have been widely investigated. The bigram topic model, which combines the advantages of the traditional n-gram model and the topic model, has proven to be a promising language modeling approach. However, the original bigram topic model assigns the same number of topics to every context word, ignoring the fact that the latent semantics of different context words vary in complexity. In this paper, we present a new bigram topic model, the bigram PLSA model, and propose a modified training strategy that assigns latent topics to context words unevenly, according to an estimate of their latent semantic complexities. As a consequence, a refined bigram PLSA model is obtained. Experiments on HUB4 Mandarin test transcriptions show that the refined bigram PLSA model outperforms existing models and yields further improvements in perplexity.
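For readers unfamiliar with the model class, the sketch below gives the standard bigram PLSA decomposition and the perplexity measure referred to in the abstract; the notation, in particular the per-context topic count K_{w_{t-1}}, is an assumed rendering of the idea of uneven topic assignment, not the paper's exact formulation.

% Bigram PLSA: the probability of word w_t given its context word w_{t-1}
% in document d is a mixture over latent topics z_k. In the refined model,
% the number of topics K_{w_{t-1}} available to a context word varies with
% its estimated latent semantic complexity (notation assumed here).
\begin{equation}
  P(w_t \mid w_{t-1}, d) = \sum_{k=1}^{K_{w_{t-1}}} P(w_t \mid w_{t-1}, z_k)\, P(z_k \mid d)
\end{equation}
% Perplexity on a test text of N words, the metric reported on the
% HUB4 Mandarin transcriptions:
\begin{equation}
  \mathrm{PPL} = \exp\!\left( -\frac{1}{N} \sum_{t=1}^{N} \log P(w_t \mid w_{t-1}, d) \right)
\end{equation}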
Year
2007
DOI
null
Venue
ASRU
Keywords
computational linguistics, matrix decomposition, language model, probability, unsupervised learning, natural language processing, speech processing, modeling language
Field
Perplexity, Computer science, Bigram, Artificial intelligence, Natural language processing, Probabilistic latent semantic analysis, Language model, Word processing, Pattern recognition, Computational linguistics, Speech recognition, Topic model, Semantics
DocType
Conference
Volume
null
Issue
null
ISSN
null
ISBN
978-1-4244-1746-9
Citations
7
PageRank
0.55
References
5
Authors
4
Name            Order  Citations  PageRank
Jiazhong Nie    1      45         4.72
Runxin Li       2      33         2.89
Dingsheng Luo   3      46         11.61
Xihong Wu       4      279        53.02