Title
Know What You Don't Know: Unanswerable Questions for SQuAD
Abstract
Extractive reading comprehension systems can often locate the correct answer to a question in a context document, but they also tend to make unreliable guesses on questions for which the correct answer is not stated in the context. Existing datasets either focus exclusively on answerable questions, or use automatically generated unanswerable questions that are easy to identify. To address these weaknesses, we present SQUADRUN, a new dataset that combines the existing Stanford Question Answering Dataset (SQuAD) with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQUADRUN, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering. SQUADRUN is a challenging natural language understanding task for existing models: a strong neural system that gets 86% F1 on SQuAD achieves only 66% F1 on SQUADRUN. We release SQUADRUN to the community as the successor to SQuAD.
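The abstract describes an evaluation in which a system gets credit on an unanswerable question only if it abstains. As a minimal illustration (not the official SQuAD evaluation script), the sketch below shows how token-level answer F1 can be extended so that an empty gold answer rewards abstention and penalizes unreliable guesses; string normalization and multi-reference handling are omitted for brevity.

from collections import Counter

def token_f1(prediction: str, gold: str) -> float:
    """Token-overlap F1 between a predicted and a gold answer string.

    An empty gold string marks an unanswerable question: the system
    scores 1.0 only if it also predicts the empty string (abstains).
    """
    pred_tokens = prediction.split()
    gold_tokens = gold.split()
    # Unanswerable case: credit only a matching abstention.
    if not pred_tokens or not gold_tokens:
        return float(pred_tokens == gold_tokens)
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

# Example usage (hypothetical predictions):
print(token_f1("Denver Broncos", "the Denver Broncos"))  # partial credit
print(token_f1("", ""))           # correct abstention on an unanswerable question -> 1.0
print(token_f1("some guess", ""))  # unreliable guess on an unanswerable question -> 0.0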
Year: 2018
Venue: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, Vol 2
DocType: Journal
Volume: abs/1806.03822
Citations: 24
PageRank: 0.72
References: 18
Authors: 3
Name               Order  Citations  PageRank
Pranav Rajpurkar   1      555        24.99
Robin Jia          2      227        12.53
Percy Liang        3      3416       172.27