Title: Using Priming to Uncover the Organization of Syntactic Representations in Neural Language Models

Abstract: Neural language models (LMs) perform well on tasks that require sensitivity to syntactic structure. Drawing on the syntactic priming paradigm from psycholinguistics, we propose a novel technique to analyze the representations that enable such success. By establishing a gradient similarity metric between structures, this technique allows us to reconstruct the organization of the LMs' syntactic representational space. We use this technique to demonstrate that LSTM LMs' representations of different types of sentences with relative clauses are organized hierarchically in a linguistically interpretable manner, suggesting that the LMs track abstract properties of the sentence.
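The priming idea the abstract describes can be sketched in code. The following is a minimal, hypothetical illustration of the general adaptation-as-priming logic (expose a model to "prime" sentences, then measure how much the surprisal of "target" sentences drops); it uses a toy count-based bigram model rather than the paper's LSTM LMs, and `BigramLM` and `adaptation_effect` are illustrative stand-ins, not the authors' implementation.

```python
# Hypothetical sketch of priming-as-adaptation, NOT the paper's code:
# a toy bigram "LM" stands in for an LSTM; the adaptation effect
# (surprisal before minus after exposure to primes) serves as a
# gradient similarity signal between prime and target structures.
import math
from collections import Counter


class BigramLM:
    """Add-one-smoothed bigram model over whitespace-tokenized sentences."""

    def __init__(self):
        self.bigrams = Counter()
        self.unigrams = Counter()
        self.vocab = set()

    def train(self, sentences):
        for s in sentences:
            toks = ["<s>"] + s.split()
            self.vocab.update(toks)
            for a, b in zip(toks, toks[1:]):
                self.bigrams[(a, b)] += 1
                self.unigrams[a] += 1

    def surprisal(self, sentence):
        """Mean per-word surprisal (bits) with add-one smoothing."""
        toks = ["<s>"] + sentence.split()
        V = max(len(self.vocab), 1)
        total = 0.0
        for a, b in zip(toks, toks[1:]):
            p = (self.bigrams[(a, b)] + 1) / (self.unigrams[a] + V)
            total += -math.log2(p)
        return total / (len(toks) - 1)


def adaptation_effect(base_corpus, primes, targets):
    """Surprisal reduction on targets after adapting on primes.

    Larger values mean the primes made the target structure more
    expected, i.e. primes and targets are more similar.
    """
    before = BigramLM()
    before.train(base_corpus)
    after = BigramLM()
    after.train(base_corpus + primes)  # "adaptation" = extra exposure to primes
    b = sum(before.surprisal(t) for t in targets) / len(targets)
    a = sum(after.surprisal(t) for t in targets) / len(targets)
    return b - a
```

Computing `adaptation_effect` for every pair of sentence types yields a similarity matrix over structures, from which the organization of the representational space can be reconstructed (e.g. by clustering), in the spirit of the technique the abstract outlines.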
Year: 2019
DOI: 10.18653/v1/k19-1007
ID: 2986128786
Field: Computer science, Priming (psychology), Artificial intelligence, Natural language processing, Syntax, Language model
DocType: Conference
Venue: CoNLL 2019
Citations: 0
PageRank: 0.34
References: 0
Authors: 3
1. Grusha Prasad (Citations: 0, PageRank: 1.35)
2. Marten van Schijndel (Citations: 0, PageRank: 1.01)
3. Tal Linzen (Citations: 52, PageRank: 14.82)