Abstract |
---|
End-to-end neural models have made significant progress in question answering; however, recent studies show that these models implicitly assume that the answer and evidence appear close together in a single document. In this work, we propose the Coarse-grain Fine-grain Coattention Network (CFC), a new question answering model that combines information from evidence across multiple documents. The CFC consists of a coarse-grain module that interprets documents with respect to the query and then finds a relevant answer, and a fine-grain module that scores each candidate answer by comparing its occurrences across all of the documents with the query. We design these modules using hierarchies of coattention and self-attention, which learn to emphasize different parts of the input. On the Qangaroo WikiHop multi-evidence question answering task, the CFC obtains a new state-of-the-art result of 70.6% on the blind test set, outperforming the previous best by 3% in accuracy despite not using pretrained contextual encoders. |
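The coattention building block mentioned in the abstract can be illustrated with a minimal numpy sketch of a single coattention layer in the general style of Xiong et al.'s Dynamic Coattention Network (an assumption for illustration; this is not the CFC authors' implementation, and the function and variable names here are hypothetical):

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def coattention(doc, query):
    """One coattention layer over encoded sequences.

    doc:   (m, d) document token encodings
    query: (n, d) query token encodings
    Returns a (m, 2d) query-aware document representation.
    """
    affinity = doc @ query.T                    # (m, n) token-pair scores
    att_over_query = softmax(affinity, axis=1)  # each doc token attends to query
    att_over_doc = softmax(affinity, axis=0)    # each query token attends to doc
    query_summary = att_over_query @ query      # (m, d) query summary per doc token
    doc_summary = att_over_doc.T @ doc          # (n, d) doc summary per query token
    # Second-level attention: route doc summaries back to document positions.
    codependent = att_over_query @ doc_summary  # (m, d)
    return np.concatenate([query_summary, codependent], axis=1)
```

In the CFC these layers are stacked hierarchically (coattention followed by self-attention) to build summaries first of each document and then of the document collection, per the abstract.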
Year | Venue | Field
---|---|---
2019 | ICLR | Question answering, Computer science, Artificial intelligence, Natural language processing, Encoder, Hierarchy, Test set

DocType | Volume | Citations
---|---|---
Journal | abs/1901.00603 | 0

PageRank | References | Authors
---|---|---
0.34 | 37 | 4
Name | Order | Citations | PageRank |
---|---|---|---|
Victor Zhong | 1 | 0 | 1.01 |
Caiming Xiong | 2 | 969 | 69.56 |
Nitish Shirish Keskar | 3 | 325 | 16.71
Richard Socher | 4 | 6770 | 230.61 |