Abstract |
---|
Long-form question answering (LFQA) aims to generate a paragraph-length answer to a given question. While current LFQA work using large pre-trained models for generation is effective at producing fluent and somewhat relevant content, one primary challenge lies in generating a faithful answer with less hallucinated content. We propose a new end-to-end framework that jointly models answer generation and machine reading. The key idea is to augment the generation model with fine-grained, answer-related salient information, which can be viewed as an emphasis on faithful facts. State-of-the-art results on two LFQA datasets, ELI5 and MS MARCO, demonstrate the effectiveness of our method in comparison with strong baselines on automatic and human evaluation metrics. A detailed analysis further proves the competency of our method in generating fluent, relevant, and more faithful answers. |
Year | DOI | Venue
---|---|---
2022 | 10.18653/v1/2022.findings-acl.61 | Findings of the Association for Computational Linguistics: ACL 2022

DocType | Volume | Citations
---|---|---
Conference | Findings of the Association for Computational Linguistics: ACL 2022 | 0

PageRank | References | Authors
---|---|---
0.34 | 0 | 7

Name | Order | Citations | PageRank
---|---|---|---
Dan Su | 1 | 0 | 0.34 |
Xiaoguang Li | 2 | 141 | 19.54 |
Jindi Zhang | 3 | 0 | 0.34 |
Lifeng Shang | 4 | 485 | 30.96 |
Xin Jiang | 5 | 150 | 32.43 |
Qun Liu | 6 | 2149 | 203.11 |
Pascale Fung | 7 | 678 | 85.84 |