Title
Do RNNs learn human-like abstract word order preferences?
Abstract
RNN language models have achieved state-of-the-art results on various tasks, but what exactly they are representing about syntax is as yet unclear. Here we investigate whether RNN language models learn human-like word order preferences in syntactic alternations. We collect language model surprisal scores for controlled sentence stimuli exhibiting major syntactic alternations in English: heavy NP shift, particle shift, the dative alternation, and the genitive alternation. We show that RNN language models reproduce human preferences in these alternations based on NP length, animacy, and definiteness. We collect human acceptability ratings for our stimuli, in the first acceptability judgment experiment directly manipulating the predictors of syntactic alternations. We show that the RNNs' performance is similar to the human acceptability ratings and is not matched by an n-gram baseline model. Our results show that RNNs learn the abstract features of weight, animacy, and definiteness that underlie soft constraints on syntactic alternations.
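To make the surprisal method in the abstract concrete, the sketch below scores the two alternants of a dative minimal pair with a small PyTorch LSTM language model. This is a minimal illustration, not the paper's setup: the model, vocabulary, and sentences are hypothetical placeholders, and the network here is untrained, so the printed numbers are meaningless. In practice one would load a trained RNN LM and read a lower total surprisal for one alternant as a preference for that word order.

# Hedged sketch: per-sentence surprisal from an RNN LM, applied to a
# dative-alternation minimal pair. The LSTM below is untrained and the
# vocabulary is a toy placeholder; outputs are illustrative only.
import math
import torch
import torch.nn as nn

# Toy vocabulary covering both alternants of one dative pair.
words = "<bos> <eos> the teacher gave book to student a".split()
vocab = {w: i for i, w in enumerate(words)}

class LSTMLM(nn.Module):
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, ids):
        h, _ = self.lstm(self.emb(ids))
        return self.out(h)  # logits over the next word at each position

def surprisal(model, sentence):
    """Total surprisal -sum_t log2 p(w_t | w_<t), in bits."""
    tokens = ["<bos>"] + sentence.split() + ["<eos>"]
    ids = torch.tensor([[vocab[w] for w in tokens]])
    with torch.no_grad():
        logprobs = torch.log_softmax(model(ids[:, :-1]), dim=-1)
    token_lp = logprobs.gather(-1, ids[:, 1:].unsqueeze(-1)).squeeze(-1)
    return -token_lp.sum().item() / math.log(2)  # nats -> bits

model = LSTMLM(len(vocab))  # untrained: scores are placeholders

# Double-object vs. prepositional-dative alternants of the same message;
# a lower total surprisal is read as a preference for that word order.
do = "the teacher gave the student a book"
pd = "the teacher gave a book to the student"
print(f"DO: {surprisal(model, do):.1f} bits, PD: {surprisal(model, pd):.1f} bits")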
Year
2018
Venue
arXiv: Computation and Language
DocType
Journal
Volume
abs/1811.01866
Citations
0
PageRank
0.34
References
0
Authors
2
Name             Order  Citations  PageRank
Richard Futrell  1      151        1.08
Roger Levy       2      9966       7.40