Abstract |
---|
In this paper we analyze two question answering tasks: the TREC-8 question answering task and a set of reading comprehension exams. First, we show that Q/A systems perform better when there are multiple answer opportunities per question. Next, we analyze common approaches to two subproblems: term overlap for answer sentence identification, and answer typing for short answer extraction. We present general tools for analyzing the strengths and limitations of techniques for these subproblems. Our results quantify the limitations of both term overlap and answer typing to distinguish between competing answer candidates. |
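The two subproblem baselines named in the abstract can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's actual implementation: the tokenizer, stopword list, and question-word-to-answer-type mapping are all assumptions.

```python
import re

# Illustrative stopword list (an assumption, not taken from the paper).
STOPWORDS = frozenset({"the", "a", "an", "of", "in", "is", "was",
                       "who", "what", "when", "where", "how"})

def tokens(text):
    """Lowercase bag of alphanumeric tokens."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def term_overlap(question, sentence):
    """Score a candidate sentence by the number of non-stopword
    terms it shares with the question (bag-of-words overlap)."""
    return len((tokens(question) - STOPWORDS) &
               (tokens(sentence) - STOPWORDS))

def best_sentence(question, sentences):
    """Answer sentence identification: pick the candidate with the
    highest term-overlap score."""
    return max(sentences, key=lambda s: term_overlap(question, s))

def expected_answer_type(question):
    """Answer typing: map the question word to a coarse answer type
    (toy mapping for illustration)."""
    wh = question.lower().split()[0]
    return {"who": "PERSON", "when": "DATE",
            "where": "LOCATION", "how": "QUANTITY"}.get(wh, "OTHER")
```

Both heuristics break down in exactly the way the abstract quantifies: when several candidate sentences share the same overlap score, or several candidate strings match the expected answer type, neither signal can distinguish between the competing candidates.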
Year | DOI | Venue
---|---|---
2001 | 10.3115/1117856.1117857 | ODQA '01 Proceedings of the workshop on Open-domain question answering - Volume 12
Keywords | Field | DocType
---|---|---
question answering task, trec-8 question answering task, answer sentence identification, general tool, comprehension exam, multiple answer opportunity, answer candidate, common approach, short answer extraction, question answering engine | Question answering, Information retrieval, Reading comprehension, Computer science, Artificial intelligence, Natural language processing, Sentence, Multiple choice | Journal
Volume | Citations | PageRank
---|---|---
cs.CL/0107006 | 13 | 2.32

References | Authors
---|---
3 | 8
Name | Order | Citations | PageRank |
---|---|---|---|
Eric Breck | 1 | 451 | 48.62 |
Marc Light | 2 | 13 | 2.32 |
Gideon S. Mann | 3 | 885 | 53.55 |
Ellen Riloff | 4 | 3154 | 454.55 |
Brianne Brown | 5 | 13 | 2.32 |
Pranav Anand | 6 | 260 | 19.70 |
Mats Rooth | 7 | 427 | 140.68 |
Michael Thelen | 8 | 198 | 15.83 |