Title
Self-training with Masked Supervised Contrastive Loss for Unknown Intents Detection
Abstract
The performance of many intent detection approaches degrades on open-set data because of out-of-domain (OOD) noise. Some works use clean but expensive labeled data to train models that are more robust to varied environments. However, large quantities of labeled samples are scarce, and such models remain fixed once training is finished. To address this problem, we propose an iterative learning framework that dynamically improves the model's ability to detect OOD intents while continually acquiring valuable new data for learning deep discriminative features. Concretely, the model generates pseudo labels for unlabeled examples via self-training and uses the local outlier factor (LOF) algorithm to detect unknown intents. Furthermore, we add a mask operation to the supervised contrastive loss (SCL) and use the masked SCL to absorb new data effectively. Experiments on CLINC and SNIPS demonstrate that our proposed method robustly performs intent detection in the presence of a high proportion of open-set data.
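The abstract's key ingredient is a supervised contrastive loss with an added mask. The paper's exact masking scheme is not specified here, so the sketch below is a minimal numpy illustration under one assumption: the mask excludes suspected-OOD (or low-confidence pseudo-labeled) samples from both the positive pairs and the contrastive denominator. The function name `masked_supcon_loss` and the `valid_mask` argument are illustrative, not from the paper.

```python
import numpy as np

def masked_supcon_loss(feats, labels, valid_mask, tau=0.1):
    """Supervised contrastive loss with an extra sample mask.

    feats: (N, D) L2-normalized embeddings.
    labels: (N,) intent labels (gold or pseudo).
    valid_mask: (N,) 1 for samples trusted as in-domain, 0 for samples
        to exclude -- our assumption about the paper's "mask operation".
    """
    n = feats.shape[0]
    sim = feats @ feats.T / tau                     # pairwise cosine sims / temperature
    logits = sim - sim.max(axis=1, keepdims=True)   # shift for numerical stability
    not_self = ~np.eye(n, dtype=bool)               # drop i == j pairs
    pair_valid = np.outer(valid_mask, valid_mask).astype(bool)
    # positives: same label, not self, both samples unmasked
    pos = (labels[:, None] == labels[None, :]) & not_self & pair_valid
    denom_mask = not_self & pair_valid
    exp_logits = np.exp(logits) * denom_mask
    log_prob = logits - np.log(exp_logits.sum(axis=1, keepdims=True) + 1e-12)
    pos_counts = pos.sum(axis=1)
    # anchors must be unmasked and have at least one positive
    anchors = (pos_counts > 0) & (valid_mask == 1)
    loss = -(pos * log_prob).sum(axis=1)[anchors] / pos_counts[anchors]
    return loss.mean()
```

With `valid_mask` all ones this reduces to the standard SupCon loss; zeroing entries removes those samples from every anchor's positive set and denominator, so noisy pseudo-labeled points cannot pull clusters apart.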
Year
2021
DOI
10.1109/IJCNN52387.2021.9534224
Venue
2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN)
Keywords
OOD detection, self-training, supervised contrastive learning, local outlier factor
DocType
Conference
ISSN
2161-4393
Citations
0
PageRank
0.34
References
0
Authors
6

Name          Order  Citations  PageRank
Zijun Liu     1      8          3.33
Yuanmeng Yan  2      0          4.06
Keqing He     3      0          3.04
Sihong Liu    4      0          1.35
Hong Xu       5      0          1.69
Weiran Xu     6      210        43.79