Abstract
---
Despite their popularity in the chatbot literature, retrieval-based models have had modest impact on task-oriented dialogue systems; the main obstacle to their application is the low-data regime of most task-oriented dialogue tasks. Inspired by the recent success of pretraining in language modelling, we propose an effective method for deploying response selection in task-oriented dialogue. The method: 1) pretrains the response selection model on large general-domain conversational corpora; and then 2) fine-tunes the pretrained model for the target dialogue domain, relying only on the small in-domain dataset to capture the nuances of that domain. Our evaluation on six diverse application domains, ranging from e-commerce to banking, demonstrates the effectiveness of the proposed training method.
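At inference time, the two-stage recipe the abstract describes (pretrain a general response encoder, then fine-tune it in-domain) reduces to ranking candidate responses by their similarity to the dialogue context. A minimal sketch of that scoring step, using a toy bag-of-words encoder as a stand-in for the pretrained model (the encoder and all names here are illustrative assumptions, not the paper's implementation):

```python
import math
from collections import Counter

def encode(text, vocab):
    """Toy bag-of-words encoder over a shared vocabulary; a stand-in
    for a pretrained sentence encoder. Returns an L2-normalised vector."""
    counts = Counter(text.lower().split())
    vec = [float(counts[w]) for w in vocab]
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def select_response(context, candidates):
    """Response selection as ranking: encode the dialogue context and
    every candidate response, score each pair by cosine similarity
    (dot product of normalised vectors), and return the best candidate."""
    vocab = sorted({w for t in [context, *candidates] for w in t.lower().split()})
    ctx = encode(context, vocab)
    def score(cand):
        vec = encode(cand, vocab)
        return sum(c * r for c, r in zip(ctx, vec))
    return max(candidates, key=score)

context = "what time does the bank open"
candidates = ["i prefer the blue one", "the bank opens at nine"]
best = select_response(context, candidates)
```

In the paper's setting the encoder would be a neural model pretrained on large conversational corpora and then fine-tuned on the small in-domain dataset; only the encoder changes, while the ranking step above stays the same.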
Year | Venue | DocType | Volume | Citations | PageRank | References | Authors
---|---|---|---|---|---|---|---
2019 | Meeting of the Association for Computational Linguistics | Conference | abs/1906.01543 | 1 | 0.35 | 0 | 10
Name | Order | Citations | PageRank |
---|---|---|---|
Matthew Henderson | 1 | 158 | 8.90 |
Ivan Vulić | 2 | 462 | 52.59 |
Daniela Gerz | 3 | 39 | 4.68 |
Iñigo Casanueva | 4 | 1 | 0.35 |
Paweł Budzianowski | 5 | 59 | 9.50 |
Sam Coope | 6 | 2 | 1.37 |
Georgios P. Spithourakis | 7 | 12 | 2.07 |
Tsung-Hsien Wen | 8 | 475 | 24.92 |
Nikola Mrkšić | 9 | 407 | 21.11 |
Pei-hao Su | 10 | 382 | 22.09 |