| Abstract |
|---|
| As intelligent dialogue systems become increasingly important in our daily lives, slot filling, one of their core components, has attracted considerable attention from both academia and industry. Despite many advances in the single-domain learning paradigm for slot filling, leveraging resources from different domains to boost learning for a target domain remains a challenge. In contrast to prior methods that supplement a sequence labeling model with slot meta-information, we address cross-domain slot filling as a machine reading comprehension (MRC) problem for the first time, viewing the extraction of slot values as a question answering process. Within this framework, we present both static and dynamic question generation mechanisms, which have complementary effects in different cross-domain settings. Our dynamic question generation approach can additionally extract multiple values for a slot at the same time. Finally, we design a pre-training and fine-tuning strategy that improves learning by leveraging MRC resources. We conducted extensive experiments on four datasets, and the results clearly demonstrate the advantages of our approach in various cross-domain settings. |
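The MRC formulation described in the abstract can be illustrated with a minimal sketch: each slot is mapped to a natural-language question, and filling the slot becomes answering that question over the user utterance. The slot names, question templates, and the keyword-based "reader" below are hypothetical stand-ins for a trained MRC model, not the authors' implementation.

```python
# Toy illustration of casting slot filling as machine reading comprehension (MRC).
# A real system would use a trained extractive QA model as the reader; here a
# simple lexicon lookup stands in for it.

SLOT_QUESTIONS = {  # static question generation: one fixed template per slot
    "city": "Which city does the user mention?",
    "date": "What date does the user mention?",
}

# Hypothetical lexicons standing in for the reader's learned knowledge.
LEXICONS = {
    "city": {"london", "paris", "beijing"},
    "date": {"tomorrow", "monday", "friday"},
}

def toy_reader(slot, utterance):
    """Stand-in for an MRC reader: answer the slot's question over the utterance."""
    question = SLOT_QUESTIONS[slot]  # in a real reader, the question conditions the model
    for token in utterance.lower().split():
        if token.strip(".,") in LEXICONS[slot]:
            return token.strip(".,")
    return None

def fill_slots(utterance):
    """Fill every slot by asking its question against the utterance."""
    return {slot: toy_reader(slot, utterance) for slot in SLOT_QUESTIONS}

print(fill_slots("Book a flight to Paris tomorrow."))
# {'city': 'paris', 'date': 'tomorrow'}
```

The appeal of this framing is that the question carries the slot's semantics in natural language, so resources and pre-trained models from general MRC tasks can be reused when the target domain has little labeled data.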
Year | DOI | Venue
---|---|---
2022 | 10.1109/TASLP.2022.3140559 | IEEE/ACM Transactions on Audio, Speech, and Language Processing

Keywords | DocType | Volume
---|---|---
Cross-domain slot filling, machine reading comprehension, slot filling | Journal | 30

Issue | ISSN | Citations
---|---|---
1 | 2329-9290 | 0

PageRank | References | Authors
---|---|---
0.34 | 13 | 4
Name | Order | Citations | PageRank
---|---|---|---
Jian Liu | 1 | 31 | 5.77 |
Mengshi Yu | 2 | 0 | 0.68 |
Yufeng Chen | 3 | 0 | 3.38 |
Jin An Xu | 4 | 15 | 24.50 |