Title
What do you learn from context? Probing for sentence structure in contextualized word representations.
Abstract
Contextualized representation models such as ELMo (Peters et al., 2018a) and BERT (Devlin et al., 2018) have recently achieved state-of-the-art results on a diverse array of downstream NLP tasks. Building on recent token-level probing work, we introduce a novel edge probing task design and construct a broad suite of sub-sentence tasks derived from the traditional structured NLP pipeline. We probe word-level contextual representations from four recent models and investigate how they encode sentence structure across a range of syntactic, semantic, local, and long-range phenomena. We find that existing models trained on language modeling and translation produce strong representations for syntactic phenomena, but only offer comparably small improvements on semantic tasks over a non-contextual baseline.
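The "edge probing" design described in the abstract trains a lightweight classifier on top of frozen contextual vectors, pooling the tokens inside one or two labeled spans and predicting the span label (e.g., a part-of-speech tag or dependency relation). The sketch below illustrates the general idea; the class name, layer sizes, and mean pooling are illustrative assumptions, not the paper's exact architecture (which uses a learned span pooling operator).

```python
# Minimal edge-probing sketch (assumed names and sizes, not the paper's exact setup).
import torch
import torch.nn as nn

class EdgeProbe(nn.Module):
    def __init__(self, d_model: int, n_labels: int, d_hidden: int = 256):
        super().__init__()
        self.proj = nn.Linear(d_model, d_hidden)
        # The classifier sees pooled representations of two spans concatenated.
        self.mlp = nn.Sequential(
            nn.Linear(2 * d_hidden, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, n_labels),
        )

    def pool(self, reps: torch.Tensor, span: tuple) -> torch.Tensor:
        # Mean-pool the projected token vectors inside a span [start, end).
        start, end = span
        return self.proj(reps[start:end]).mean(dim=0)

    def forward(self, reps: torch.Tensor, span1: tuple, span2: tuple) -> torch.Tensor:
        # reps: [seq_len, d_model] contextual vectors; detached so only the
        # probe's parameters are trained and the encoder stays frozen.
        reps = reps.detach()
        pooled = torch.cat([self.pool(reps, span1), self.pool(reps, span2)])
        return self.mlp(pooled)  # unnormalized label scores

# Hypothetical usage: probe ELMo-sized (1024-d) vectors for a 49-way label set.
probe = EdgeProbe(d_model=1024, n_labels=49)
```

Single-span tasks (e.g., part-of-speech or constituent labeling) would use only one pooled span; two-span tasks (e.g., dependency or semantic role labeling) use both, as sketched above.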
Year
2019
Venue
ICLR
DocType
Conference
Volume
abs/1905.06316
Citations
2
PageRank
0.36
References
0
Authors
11
Name                 Order   Citations   PageRank
Ian Tenney           1       4           3.79
Patrick Xia          2       9           4.55
Berlin Chen          3       151         34.59
Alex Wang            4       71          5.27
Adam Poliak          5       12          2.82
R. Thomas McCoy      6       11          5.98
Najoung Kim          7       4           2.44
Benjamin Van Durme   8       1268        92.32
Samuel R. Bowman     9       906         44.99
Dipanjan Das         10      1619        75.14
Ellie Pavlick        11      116         21.07