Title
Assessing Language Models with Scaling Properties
Abstract
Language models have primarily been evaluated with perplexity. While perplexity quantifies prediction performance in the most comprehensible way, it does not provide qualitative information on where models succeed or fail. Another approach for evaluating language models is thus proposed, using the scaling properties of natural language. Five such tests are considered, with the first two accounting for the vocabulary population and the other three for the long memory of natural language. The following models were evaluated with these tests: n-grams, probabilistic context-free grammar (PCFG), the Simon and Pitman-Yor (PY) processes, the hierarchical PY process, and neural language models. Only the neural language models exhibit the long memory properties of natural language, and only to a limited degree. The effectiveness of each test for assessing these models is also discussed.
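The abstract does not name the five tests, but vocabulary-population scaling is commonly measured through Heaps'-law-style vocabulary growth, where the vocabulary size V(n) grows as roughly k·n^β with text length n. The sketch below (a hypothetical illustration, not the paper's actual procedure) estimates the exponent β for a token stream by a log-log least-squares fit:

```python
import math
import random

def vocab_growth(tokens):
    """Return (n, V(n)) pairs: running text length vs. vocabulary size."""
    seen = set()
    points = []
    for i, tok in enumerate(tokens, start=1):
        seen.add(tok)
        points.append((i, len(seen)))
    return points

def heaps_exponent(points):
    """Fit V(n) ~ k * n**beta by least squares in log-log space; return beta."""
    xs = [math.log(n) for n, _ in points]
    ys = [math.log(v) for _, v in points]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Toy corpus: a Zipf-like token stream standing in for text sampled
# from a language model (purely illustrative data).
random.seed(0)
ranks = range(1, 5001)
weights = [1.0 / r for r in ranks]
tokens = random.choices(ranks, weights=weights, k=20000)

beta = heaps_exponent(vocab_growth(tokens))
```

A generated text whose β is far from the value measured on the training corpus would fail such a vocabulary-population test, regardless of its perplexity.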
Year
2018
Venue
arXiv: Computation and Language
Field
Population, Perplexity, Computer science, Grammar, Natural language, Artificial intelligence, Natural language processing, Probabilistic logic, Vocabulary, Scaling, Language model, Machine learning
DocType
Journal
Volume
abs/1804.08881
Citations
1
PageRank
0.36
References
19
Authors
2
Name | Order | Citations | PageRank
Shuntaro Takahashi | 1 | 1 | 0.36
Kumiko Tanaka-Ishii | 2 | 261 | 36.69