Title |
---|
A Well-Composed Text is Half Done! Composition Sampling for Diverse Conditional Generation |

Abstract |
---|
We propose Composition Sampling, a simple but effective method to generate diverse outputs for conditional generation of higher quality compared to previous stochastic decoding strategies. It builds on recently proposed plan-based neural generation models (Narayan et al., 2021) that are trained to first create a composition of the output and then generate by conditioning on it and the input. Our approach avoids text degeneration by first sampling a composition in the form of an entity chain and then using beam search to generate the best possible text grounded to this entity chain. Experiments on summarization (CNN/DailyMail and XSum) and question generation (SQuAD), using existing and newly proposed automatic metrics together with human-based evaluation, demonstrate that Composition Sampling is currently the best available decoding strategy for generating diverse meaningful outputs. |

Year | DOI | Venue
---|---|---
2022 | 10.18653/v1/2022.acl-long.94 | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022), Vol 1: (Long Papers)

DocType | Volume | Citations
---|---|---
Conference | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 0

PageRank | References | Authors
---|---|---
0.34 | 0 | 7

Name | Order | Citations | PageRank
---|---|---|---
Shashi Narayan | 1 | 120 | 13.15 |
Gonçalo Simões | 2 | 0 | 0.34 |
Yao Zhao | 3 | 5 | 1.75 |
Joshua Maynez | 4 | 2 | 2.39 |
Dipanjan Das | 5 | 0 | 0.34 |
Michael Collins | 6 | 0 | 0.34 |
Mirella Lapata | 7 | 5973 | 369.52 |