Title
FewJoint: few-shot learning for joint dialogue understanding
Abstract
Few-shot learning (FSL) is one of the key future steps in machine learning and has attracted considerable attention. In this paper, we focus on the FSL problem of dialogue understanding, which contains two closely related tasks: intent detection and slot filling. Dialogue understanding has been shown to benefit greatly from jointly learning the two sub-tasks. However, such joint learning becomes challenging in few-shot scenarios: on the one hand, the sparsity of samples greatly magnifies the difficulty of modeling the connection between the two tasks; on the other hand, how to jointly learn multiple tasks in the few-shot setting remains underinvestigated. In response to this, we introduce FewJoint, the first FSL benchmark for joint dialogue understanding. FewJoint provides a new corpus covering 59 different dialogue domains from a real industrial API, together with a code platform that eases FSL experiment set-up, both of which are expected to advance research in this field. Further, we find that the limited performance typical of the few-shot setting often leads to noisy sharing between the two sub-tasks and disturbs joint learning. To tackle this, we guide slot filling with explicit intent information and propose a novel trust gating mechanism that blocks low-confidence intent information to ensure high-quality sharing. Besides, we introduce a Reptile-based meta-learning strategy to achieve better generalization in unseen few-shot domains. In the experiments, the proposed method brings significant improvements on two datasets and achieves new state-of-the-art performance.
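The trust gating idea sketched in the abstract can be illustrated with a minimal example: intent information is shared with slot filling only when the intent prediction is sufficiently confident. The gate form, threshold, class/embedding names, and tensor shapes below are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch (not the authors' code): a confidence-based "trust gate" that
# decides whether predicted intent information is injected into slot-filling features.
import torch
import torch.nn as nn


class TrustGate(nn.Module):
    def __init__(self, num_intents: int, hidden_dim: int, threshold: float = 0.5):
        super().__init__()
        # Hypothetical intent embedding used to pass intent information to slots.
        self.intent_embed = nn.Embedding(num_intents, hidden_dim)
        self.threshold = threshold

    def forward(self, intent_logits: torch.Tensor, slot_hidden: torch.Tensor) -> torch.Tensor:
        """intent_logits: [batch, num_intents]; slot_hidden: [batch, seq_len, hidden_dim]."""
        probs = torch.softmax(intent_logits, dim=-1)
        confidence, intent_id = probs.max(dim=-1)                   # [batch]
        # Block low-confidence intent information (hard threshold, for illustration only).
        gate = (confidence > self.threshold).float().unsqueeze(-1)  # [batch, 1]
        intent_info = self.intent_embed(intent_id) * gate           # zero out untrusted intents
        # Inject the (possibly blocked) intent information at every slot position.
        return slot_hidden + intent_info.unsqueeze(1)


if __name__ == "__main__":
    gate = TrustGate(num_intents=10, hidden_dim=16)
    logits = torch.randn(2, 10)
    slots = torch.randn(2, 8, 16)
    print(gate(logits, slots).shape)  # torch.Size([2, 8, 16])
```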
Year
2022
DOI
10.1007/s13042-022-01604-9
Venue
International Journal of Machine Learning and Cybernetics
Keywords
Few-shot learning, Joint learning, Dialogue understanding
DocType
Journal
Volume
13
Issue
11
ISSN
1868-8071
Citations
0
PageRank
0.34
References
2
Authors
6
Name            Order  Citations  PageRank
Yutai Hou       1      3          3.43
Xinghao Wang    2      0          0.34
Cheng Chen      3      0          0.34
Bohan Li        4      0          0.34
Wanxiang Che    5      711        66.39
Zhigang Chen    6      204        34.10