Title
Entropy Rate Estimation for English via a Large Cognitive Experiment Using Mechanical Turk.
Abstract
The entropy rate h of a natural language quantifies the complexity underlying the language. While recent studies have used computational approaches to estimate this rate, their results depend fundamentally on the performance of the language model used for prediction. In contrast, in 1951 Shannon conducted a cognitive experiment to estimate the rate without relying on any such artifact. Shannon's experiment, however, used only one subject, calling into question the statistical validity of his estimate of h = 1.3 bits per character for the entropy rate of English. In this study, we reproduced Shannon's experiment on a much larger scale to reevaluate the entropy rate h, recruiting subjects via Amazon Mechanical Turk, a crowd-sourcing service. Each online subject was shown the preceding characters of a text and asked to guess the succeeding character repeatedly until answering correctly. We collected 172,954 character predictions and analyzed them with a bootstrap technique. The analysis suggests that a large number of character predictions per context length, perhaps as many as 10^3, is necessary to obtain a convergent estimate of the entropy rate; with fewer predictions, the resulting value of h may be underestimated. Our final entropy estimate was h ≈ 1.22 bits per character.
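To make the method concrete, the following is a minimal Python sketch (not the authors' code) of how Shannon-style guessing data can be turned into entropy-rate bounds and bootstrapped. It uses Shannon's (1951) upper bound -sum_i q_i log2 q_i and lower bound sum_i i (q_i - q_{i+1}) log2 i over the distribution q of guess ranks; the guess-rank data below is synthetic and purely illustrative.

import numpy as np

rng = np.random.default_rng(0)

def shannon_bounds(guess_ranks, alphabet_size=27):
    # guess_ranks[k] = number of guesses a subject needed before the
    # correct character was found (1 = correct on the first try).
    counts = np.bincount(guess_ranks, minlength=alphabet_size + 1)[1:alphabet_size + 1]
    q = counts / counts.sum()                  # q[i-1] = P(correct on i-th guess)
    nz = q > 0
    upper = -(q[nz] * np.log2(q[nz])).sum()    # Shannon's upper bound
    i = np.arange(1, alphabet_size + 1)
    q_next = np.append(q[1:], 0.0)             # q_{28} = 0 by convention
    lower = (i * (q - q_next) * np.log2(i)).sum()  # Shannon's lower bound
    return lower, upper

def bootstrap_upper(guess_ranks, n_boot=1000):
    # Percentile bootstrap of the upper bound, resampling individual
    # predictions with replacement.
    n = len(guess_ranks)
    stats = [shannon_bounds(rng.choice(guess_ranks, size=n))[1]
             for _ in range(n_boot)]
    return np.percentile(stats, [2.5, 97.5])

# Synthetic guess ranks standing in for the 172,954 Mechanical Turk records.
guesses = rng.integers(1, 6, size=10_000)
print(shannon_bounds(guesses))
print(bootstrap_upper(guesses))

Resampling over individual predictions, as above, is one natural bootstrap design for such data; it also illustrates why small samples tend to underestimate h: rare high guess ranks are underrepresented, which shrinks the tail of q and deflates both bounds.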
Year
2019
DOI
10.3390/e21121201
Venue
ENTROPY
Keywords
entropy rate, natural language, crowd source, Amazon Mechanical Turk, Shannon entropy
Field
Entropy rate, English language, Validity, Natural language, Cognition, Statistics, Entropy (information theory), Mathematics, Bootstrapping (electronics), Language model
DocType
Journal
Volume
21
Issue
12
Citations
0
PageRank
0.34
References
0
Authors
3
Name                 Order  Citations  PageRank
Geng Ren             1      0          0.34
Shuntaro Takahashi   2      2          1.38
Kumiko Tanaka-Ishii  3      261        36.69