Title
Improving Multiple Documents Grounded Goal-Oriented Dialog Systems via Diverse Knowledge Enhanced Pretrained Language Model
Abstract
In this paper, we describe our submission to the MultiDoc2Dial task, which aims to model goal-oriented dialogues grounded in multiple documents. The task is split into two subtasks: grounding span prediction and agent response generation. The baseline for the task is a retrieval-augmented generation model, which combines a dense passage retrieval (DPR) model for the retrieval part with a BART model for the generation part. The main challenge of this task is that the system requires a large amount of pretrained knowledge to generate responses grounded in multiple documents. To address this challenge, we adopt multi-task learning, data augmentation, model pretraining, and contrastive learning to broaden our model's coverage of pretrained knowledge. We experiment with various settings of our method to show the effectiveness of our approaches. Our final model achieves an F1 score of 37.78, SacreBLEU of 22.94, Meteor of 36.97, and RougeL of 35.46, for a total of 133.15, on the test set released for the DialDoc Shared Task at ACL 2022.
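The retrieval-augmented generation baseline named in the abstract can be sketched with the Hugging Face transformers RAG implementation, which pairs a DPR question encoder with a BART generator. The "facebook/rag-sequence-nq" checkpoint, the dummy index, and the sample query below are illustrative assumptions for a self-contained demo, not the authors' exact setup; in MultiDoc2Dial the retrieval index would be built from the grounding documents and the query would be the dialogue history.

```python
# Minimal sketch of the RAG baseline: a DPR retriever feeding a BART generator.
# Assumptions: the Hugging Face `transformers` RAG implementation and the public
# "facebook/rag-sequence-nq" checkpoint; the paper's system would instead index
# the MultiDoc2Dial grounding documents and condition on the dialogue history.
from transformers import RagTokenizer, RagRetriever, RagSequenceForGeneration

checkpoint = "facebook/rag-sequence-nq"
tokenizer = RagTokenizer.from_pretrained(checkpoint)
# use_dummy_dataset keeps the demo small by skipping the full Wikipedia index.
retriever = RagRetriever.from_pretrained(
    checkpoint, index_name="exact", use_dummy_dataset=True
)
model = RagSequenceForGeneration.from_pretrained(checkpoint, retriever=retriever)

# A single user turn stands in for the full grounded dialogue query.
inputs = tokenizer("how do I renew my driver license?", return_tensors="pt")
outputs = model.generate(input_ids=inputs["input_ids"])
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```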
Year
2022
DOI
10.18653/v1/2022.dialdoc-1.15
Venue
Proceedings of the Second DialDoc Workshop on Document-Grounded Dialogue and Conversational Question Answering (DialDoc 2022)
DocType
Conference
Volume
Proceedings of the Second DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering
Citations
0
PageRank
0.34
References
0
Authors
7
Name             Order   Citations   PageRank
Yunah Jang       1       0           0.34
Dongryeol Lee    2       0           0.34
Hyung Joo Park   3       0           0.34
Taegwan Kang     4       1           1.07
Hwanhee Lee      5       0           0.34
Hyunkyung Bae    6       0           0.34
Kyomin Jung      7       394         37.38