Title
Tunable Discounting Mechanisms for Language Modeling
Abstract
Language models are fundamental to many applications in natural language processing. Most language models are trained in a way that does not support tuning of their discount parameters. In this work, we present novel language models based on tunable discounting mechanisms: the models are trained on a large training set, but their discount parameters can be tuned to a target set. We explore tunable discounting and polynomial discounting based on modified Kneser-Ney models. With the resulting implementation, our language models achieve perplexity improvements in both in-domain and out-of-domain evaluation. The experimental results indicate that our new models significantly outperform the baseline model and are especially well suited for domain adaptation.
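The core idea of the abstract — train count statistics once on a large corpus, then tune only the discount parameter against a held-out target set — can be illustrated with a minimal sketch. This is not the authors' implementation: it uses a plain interpolated Kneser-Ney bigram model with a single absolute discount D, and a simple grid search over D to minimize target-set perplexity; the toy corpora are invented for illustration.

```python
from collections import Counter, defaultdict
import math

def train(tokens):
    """Collect the count statistics a bigram Kneser-Ney model needs (done once)."""
    uni = Counter(tokens)
    bi = Counter(zip(tokens, tokens[1:]))
    followers = defaultdict(set)   # distinct words following each context
    preceders = defaultdict(set)   # distinct contexts preceding each word
    for (u, w) in bi:
        followers[u].add(w)
        preceders[w].add(u)
    return uni, bi, followers, preceders

def kn_prob(u, w, uni, bi, followers, preceders, n_bigram_types, D):
    """Interpolated Kneser-Ney bigram probability with tunable discount D."""
    cont = len(preceders[w]) / n_bigram_types        # continuation probability
    if uni[u] == 0:
        return cont                                  # unseen context: back off
    discounted = max(bi[(u, w)] - D, 0.0) / uni[u]   # absolute discounting
    lam = D * len(followers[u]) / uni[u]             # mass freed by discounting
    return discounted + lam * cont

def perplexity(tokens, model, D):
    uni, bi, followers, preceders = model
    logp, n = 0.0, 0
    for u, w in zip(tokens, tokens[1:]):
        p = kn_prob(u, w, uni, bi, followers, preceders, len(bi), D)
        logp += math.log(max(p, 1e-12))
        n += 1
    return math.exp(-logp / n)

# Train once on the (toy) large corpus; tune D on the (toy) target set.
train_text = "the cat sat on the mat the dog sat on the rug".split()
target_text = "the dog sat on the mat".split()
model = train(train_text)
best_D = min((d / 10 for d in range(1, 10)),
             key=lambda d: perplexity(target_text, model, d))
```

Because the counts are fixed, retuning for a new domain costs only a cheap search over D — which is what makes this style of model attractive for domain adaptation.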
Year
2015
DOI
10.1007/978-3-319-23862-3_58
Venue
IScIDE
Keywords
Language model, Tunable discounting, Polynomial discounting, Domain adaptation
Field
Training set, Perplexity, Polynomial, Discounting, Domain adaptation, Computer science, Artificial intelligence, Machine learning, Language model
DocType
Conference
Volume
9243
ISSN
0302-9743
Citations
1
PageRank
0.36
References
4
Authors
5
Name           Order  Citations  PageRank
Junfei Guo     1      7          3.01
Juan Liu       2      1128       145.32
Xianlong Chen  3      1          0.69
Qi Han         4      11         4.90
Kunxiao Zhou   5      1          0.36