Abstract
---
An accurate abstractive summary of a document should contain all its salient information and should be logically entailed by the input document. We improve these important aspects of abstractive summarization via multi-task learning with the auxiliary tasks of question generation and entailment generation, where the former teaches the summarization model how to look for salient questioning-worthy details, and the latter teaches the model how to rewrite a summary which is a directed-logical subset of the input document. We also propose novel multi-task architectures with high-level (semantic) layer-specific sharing across multiple encoder and decoder layers of the three tasks, as well as soft-sharing mechanisms (and show performance ablations and analysis examples of each contribution). Overall, we achieve statistically significant improvements over the state-of-the-art on both the CNN/DailyMail and Gigaword datasets, as well as on the DUC-2002 transfer setup. We also present several quantitative and qualitative analysis studies of our model's learned saliency and entailment skills.
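
The soft-sharing mechanism mentioned in the abstract is commonly implemented as a distance penalty between task-specific copies of the designated shared layers, rather than tying them to identical weights. Below is a minimal PyTorch sketch of that idea under this assumption; the layer choice, module names, sizes, and coefficient are all illustrative, not the paper's exact configuration.

```python
import torch.nn as nn

# Each task keeps its own copy of a designated encoder layer; an L2
# distance penalty pulls the copies toward each other (soft sharing)
# instead of forcing them to be identical (hard sharing).
HIDDEN = 256
summ_layer = nn.LSTM(HIDDEN, HIDDEN, batch_first=True)  # summarization encoder layer
aux_layer = nn.LSTM(HIDDEN, HIDDEN, batch_first=True)   # question/entailment encoder layer

def soft_sharing_penalty(module_a, module_b, coeff=1e-4):
    """Squared L2 distance between corresponding parameters of two
    softly shared modules, scaled by a small tunable coefficient
    (the coefficient value here is an illustrative assumption)."""
    penalty = 0.0
    for p_a, p_b in zip(module_a.parameters(), module_b.parameters()):
        penalty = penalty + (p_a - p_b).pow(2).sum()
    return coeff * penalty

# In alternating multi-task training, each task's loss would be
# augmented with this term so the designated layers exchange knowledge
# without being forced to coincide:
#   loss = task_loss + soft_sharing_penalty(summ_layer, aux_layer)
```
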
Year | DOI | Venue
---|---|---
2018 | 10.18653/v1/P18-1064 | ACL (1)

Field | DocType | Volume
---|---|---
Automatic summarization, Logical consequence, Computer science, Salience (neuroscience), Artificial intelligence, Encoder, Natural language processing, Question generation, Salient | Conference | 1

Citations | PageRank | References
---|---|---
3 | 0.55 | 0

Authors (3)
---

Name | Order | Citations | PageRank
---|---|---|---
Han Guo | 1 | 125 | 10.98 |
Ramakanth Pasunuru | 2 | 25 | 3.69 |
Mohit Bansal | 3 | 871 | 63.19 |