Title
The Performance Appraising of Language Models and the Entropy Estimation of Chinese
Abstract
We give a quantitative account of perplexity as a measure for evaluating language models, grounded in the concept of entropy from information theory: the smaller the entropy of the language as estimated by a language model, the more accurate the model. An interpolated model built from two (n-1)-gram models outperforms its (n-1)-gram component models, but not an n-gram model. We also explore methods for estimating the entropy of Chinese using language models.
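The sketch below is not from the paper; it is a minimal illustration of the quantities the abstract relates: per-word cross-entropy H, perplexity PP = 2^H, and linear interpolation of language models. The toy corpus, the mixture weight lam, and the helper name cross_entropy are all illustrative assumptions, and the example mixes a bigram with a unigram (a standard construction) rather than the paper's two-(n-1)-gram setup.

import math
from collections import Counter

# Toy corpus (assumed for illustration only).
corpus = "the cat sat on the mat the cat ate".split()

# Unigram model, maximum-likelihood estimate: p(w) = count(w) / N.
counts = Counter(corpus)
total = sum(counts.values())
unigram = {w: c / total for w, c in counts.items()}

# Bigram model, same MLE idea: p(w2 | w1) = count(w1 w2) / count(w1).
pair_counts = Counter(zip(corpus, corpus[1:]))
bigram = {(w1, w2): c / counts[w1] for (w1, w2), c in pair_counts.items()}

def cross_entropy(probs):
    # H = -(1/N) * sum log2 p(w_i | context); lower H means a better model,
    # and perplexity is PP = 2^H.
    return -sum(math.log2(p) for p in probs) / len(probs)

# Evaluate the unigram model on the (training) corpus.
uni_probs = [unigram[w] for w in corpus]
H_uni = cross_entropy(uni_probs)
print(f"unigram:      H = {H_uni:.3f} bits, PP = {2 ** H_uni:.3f}")

# Interpolated model: p(w2 | w1) = lam * p_bigram + (1 - lam) * p_unigram.
# The abstract's claim is that such a mixture beats its component models.
lam = 0.7  # assumed mixture weight; in practice tuned on held-out data
interp_probs = [
    lam * bigram.get((w1, w2), 0.0) + (1 - lam) * unigram[w2]
    for w1, w2 in zip(corpus, corpus[1:])
]
H_interp = cross_entropy(interp_probs)
print(f"interpolated: H = {H_interp:.3f} bits, PP = {2 ** H_interp:.3f}")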
Year
2007
DOI
10.1109/FSKD.2007.579
Venue
FSKD (2)
Keywords
gram model, quantified reasoning, gram component model, n-gram model, language models, language model, information analysis, information theory, entropy estimation, natural language processing, entropy, interpolated model, component model
Field
Information theory, Entropy estimation, Perplexity, Computer science, Interpolation, Speech recognition, Information diagram, Natural language processing, Artificial intelligence, Machine learning, Language model
DocType
Conference
Volume
2
ISBN
978-0-7695-2874-8
Citations
0
PageRank
0.34
References
4
Authors
3
Name             Order  Citations  PageRank
Yangsen Zhang    1      111        2.10
Gaijuan Huang    2      0          1.01
Miao Mai         3      0          0.34