Title |
---|
What do you learn from context? Probing for sentence structure in contextualized word representations. |
Abstract |
---|
Contextualized representation models such as ELMo (Peters et al., 2018a) and BERT (Devlin et al., 2018) have recently achieved state-of-the-art results on a diverse array of downstream NLP tasks. Building on recent token-level probing work, we introduce a novel edge probing task design and construct a broad suite of sub-sentence tasks derived from the traditional structured NLP pipeline. We probe word-level contextual representations from four recent models and investigate how they encode sentence structure across a range of syntactic, semantic, local, and long-range phenomena. We find that existing models trained on language modeling and translation produce strong representations for syntactic phenomena, but only offer comparably small improvements on semantic tasks over a non-contextual baseline. |
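The abstract's central idea, "edge probing", trains a small classifier on top of frozen contextual word representations to predict a label for a marked span (e.g., a constituent type or semantic role), so that probe accuracy reflects what the representations encode rather than what the probe can learn. The sketch below is a minimal, hedged illustration of that idea, not the authors' implementation: the class name, dimensions, MLP size, and the mean-pooling choice are all illustrative assumptions (the paper's actual probe uses a learned span-pooling operator and handles span pairs as well).

```python
# Minimal sketch of an edge-probing classifier over frozen contextual vectors.
# Everything here (names, sizes, mean pooling) is an illustrative assumption,
# not the paper's exact architecture.
import torch
import torch.nn as nn

class EdgeProbe(nn.Module):
    def __init__(self, hidden_dim: int, num_labels: int):
        super().__init__()
        # Deliberately small MLP: performance should reflect the frozen
        # representations, not the capacity of the probe itself.
        self.mlp = nn.Sequential(
            nn.Linear(hidden_dim, 256), nn.ReLU(), nn.Linear(256, num_labels)
        )

    def forward(self, token_reprs: torch.Tensor, span: tuple) -> torch.Tensor:
        # token_reprs: (seq_len, hidden_dim) frozen outputs of e.g. ELMo/BERT.
        start, end = span  # [start, end) token indices of the labeled span
        pooled = token_reprs[start:end].mean(dim=0)  # simple mean pooling
        return self.mlp(pooled)

# Toy usage with random vectors standing in for real contextual embeddings.
probe = EdgeProbe(hidden_dim=768, num_labels=5)
reprs = torch.randn(12, 768)        # 12 tokens; encoder stays frozen
logits = probe(reprs, span=(2, 5))  # classify the span covering tokens 2-4
print(logits.shape)                 # torch.Size([5])
```

Only the probe's parameters are trained; keeping the encoder frozen is what lets per-task probe accuracy be read as a measurement of the representations themselves.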
Year | Venue | DocType |
---|---|---|
2019 | ICLR | Conference |
Volume | Citations | PageRank
---|---|---|
abs/1905.06316 | 2 | 0.36
References | Authors
---|---|
0 | 11
Name | Order | Citations | PageRank |
---|---|---|---|
Ian Tenney | 1 | 4 | 3.79 |
Patrick Xia | 2 | 9 | 4.55
Berlin Chen | 3 | 151 | 34.59 |
Alex Wang | 4 | 71 | 5.27 |
Adam Poliak | 5 | 12 | 2.82 |
R. Thomas McCoy | 6 | 11 | 5.98 |
Najoung Kim | 7 | 4 | 2.44 |
Benjamin Van Durme | 8 | 1268 | 92.32 |
Samuel R. Bowman | 9 | 906 | 44.99 |
Dipanjan Das | 10 | 1619 | 75.14 |
Ellie Pavlick | 11 | 116 | 21.07 |