Title
A Bayesian model of biases in artificial language learning: the case of a word-order universal.
Abstract
In this article, we develop a hierarchical Bayesian model of learning in a general type of artificial language-learning experiment in which learners are exposed to a mixture of grammars representing the variation present in real learners' input, particularly at times of language change. The modeling goal is to formalize and quantify hypothesized learning biases. The test case is an experiment (Culbertson, Smolensky, & Legendre, 2012) targeting the learning of word-order patterns in the nominal domain. The model identifies internal biases of the experimental participants, providing evidence that learners impose (possibly arbitrary) properties on the grammars they learn, potentially resulting in the cross-linguistic regularities known as typological universals. Learners exposed to mixtures of artificial grammars tended to shift those mixtures in certain ways rather than others; the model reveals how learners' inferences are systematically affected by specific prior biases. These biases are in line with a typological generalization, Greenberg's Universal 18, which bans a particular word-order pattern relating nouns, adjectives, and numerals.
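The abstract describes a hierarchical Bayesian model in which prior biases shift a learner's inferred mixture of grammars away from the raw input proportions. The following is a minimal illustrative sketch of that general idea only, not the paper's actual model: a Beta-Binomial learner whose biased prior pulls the posterior estimate of a word-order proportion toward a preferred pattern. The function name and all numeric values are assumptions chosen for illustration.

```python
# Illustrative sketch (NOT the model from Culbertson & Smolensky 2012):
# a Beta-Binomial learner estimating the proportion of one word order
# (e.g. the majority order in its input). A prior biased toward that
# order shifts the posterior mean above the raw input proportion,
# mimicking how prior biases can reshape a learned mixture.

def posterior_mean(successes, trials, prior_a, prior_b):
    """Posterior mean of a Beta(prior_a, prior_b) prior after observing
    `successes` out of `trials` Bernoulli outcomes (conjugate update)."""
    return (prior_a + successes) / (prior_a + prior_b + trials)

# Hypothetical input mixture: 70 of 100 utterances use the majority order.
unbiased = posterior_mean(70, 100, 1, 1)    # flat Beta(1, 1) prior
biased = posterior_mean(70, 100, 20, 2)     # prior favoring this order

print(unbiased)  # close to the input proportion 0.70
print(biased)    # shifted above the input proportion
```

Under the flat prior the estimate stays near the input frequency; under the biased prior it is regularized toward the preferred pattern, which is the qualitative effect the paper's model formalizes and quantifies.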
Year
2012
DOI
10.1111/j.1551-6709.2012.01264.x
Venue
COGNITIVE SCIENCE
Keywords
Bayesian modeling, Learning biases, Artificial language learning, Typology, Word order
Field
Cognitive bias, Rule-based machine translation, Word order, Bayesian inference, Noun, Psychology, Cognitive psychology, Language acquisition, Natural language processing, Artificial intelligence, Constructed language, Bayesian statistics
DocType
Journal
Volume
36
Issue
8
ISSN
0364-0213
Citations
2
PageRank
0.66
References
3
Authors
2
Name                 Order  Citations  PageRank
Jennifer Culbertson  1      4          5.80
Paul Smolensky       2      2159       3.76