Title
Probing What Different NLP Tasks Teach Machines about Function Word Comprehension
Abstract
We introduce a set of nine challenge tasks that test for the understanding of function words. These tasks are created by structurally mutating sentences from existing datasets to target the comprehension of specific types of function words (e.g., prepositions, wh-words). Using these probing tasks, we explore the effects of various pretraining objectives for sentence encoders (e.g., language modeling, CCG supertagging and natural language inference (NLI)) on the learned representations. Our results show that pretraining on CCG---our most syntactic objective---performs the best on average across our probing tasks, suggesting that syntactic knowledge helps function word comprehension. Language modeling also shows strong performance, supporting its widespread use for pretraining state-of-the-art NLP models. Overall, no pretraining objective dominates across the board, and our function word probing tasks highlight several intuitive differences between pretraining objectives, e.g., that NLI helps the comprehension of negation.
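Below is a minimal sketch (not the paper's released code or data) of the kind of probing setup the abstract describes: a frozen sentence encoder is paired with a lightweight classifier on a binary task built by mutating a function word (here, a preposition swap). The `toy_encoder` function, the example sentence pairs, and the logistic-regression probe are all hypothetical stand-ins chosen for illustration; in the paper the encoder would come from pretraining objectives such as language modeling, CCG supertagging, or NLI, and only the probe on top would be trained.

# Minimal sketch, assuming a frozen pretrained encoder is available.
# toy_encoder is a deterministic placeholder; swap in a real encoder to probe it.
import hashlib
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def toy_encoder(sentence: str, dim: int = 64) -> np.ndarray:
    """Bag-of-hashed-tokens embedding; a stand-in for a real frozen sentence encoder."""
    vecs = []
    for tok in sentence.lower().split():
        seed = int(hashlib.md5(tok.encode()).hexdigest(), 16) % (2**32)
        vecs.append(np.random.default_rng(seed).standard_normal(dim))
    return np.mean(vecs, axis=0)

# Illustrative probing items: original sentences (label 1) vs. versions with a
# swapped preposition (label 0), mimicking the structural-mutation recipe.
pairs = [
    ("The keys are on the table", "The keys are at the table"),
    ("She walked into the room", "She walked onto the room"),
    ("He stayed home during the storm", "He stayed home among the storm"),
    ("They argued about the plan", "They argued above the plan"),
]
X = np.stack([toy_encoder(s) for orig, mut in pairs for s in (orig, mut)])
y = np.array([1, 0] * len(pairs))

# Encoder stays frozen; only the probe classifier is fit on the embeddings.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("probe accuracy:", probe.score(X_te, y_te))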
Year
2019
Venue
arXiv: Computation and Language
Field
Computer science, Function word, Artificial intelligence, Natural language processing, Comprehension
DocType
Citations
1
PageRank
0.36
Journal
References
0
Authors
12
Name
Order
Citations
PageRank
Najoung Kim142.44
Roma Patel211.71
Adam Poliak311.04
Alex Wang4715.27
patrick xia594.55
R. Thomas McCoy6115.98
Ian Tenney743.79
Alexis Ross810.36
tal linzen95214.82
Benjamin Van Durme10126892.32
Samuel R. Bowman1190644.99
Ellie Pavlick1211621.07