Title
Learning to teach and learn for semi-supervised few-shot image classification
Abstract
This paper presents a novel semi-supervised few-shot image classification method named Learning to Teach and Learn (LTTL) that effectively leverages unlabeled samples in small-data regimes. Our method is based on self-training, which assigns pseudo labels to unlabeled data. However, conventional pseudo-labeling relies heavily on an initial model trained with only a handful of labeled data and may therefore produce many noisily labeled samples. We propose to solve this problem in three steps: first, cherry-picking selects valuable samples from the pseudo-labeled data using a soft weighting network; then, cross-teaching lets the classifiers teach each other so that more noisy labels are rejected, and a feature-synthesizing strategy is introduced to prevent clean samples from being rejected by mistake; finally, the classifiers are fine-tuned with the few labeled data to avoid gradient drift. We use the meta-learning paradigm to optimize the parameters of the whole framework. The proposed LTTL combines the power of meta-learning and self-training, achieving superior performance over baseline methods on two public benchmarks.
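For illustration, the PyTorch-style sketch below shows how the three steps described in the abstract (pseudo-labeling with cherry-picking, cross-teaching, and fine-tuning on the labeled support set) could be wired together in a single episode. It is a minimal sketch, not the authors' implementation: all names (`lttl_episode_sketch`, `weight_net`, `clf_a`, `clf_b`) are hypothetical, cross-teaching is simplified to a plain agreement check, and the feature-synthesizing strategy and the outer meta-learning loop of LTTL are omitted.

```python
# Hypothetical sketch of the self-training loop described in the abstract.
# Names and shapes are illustrative, not taken from the authors' code.
import torch
import torch.nn.functional as F


def lttl_episode_sketch(encoder, clf_a, clf_b, weight_net,
                        x_support, y_support, x_unlabeled,
                        inner_steps=5, lr=1e-2):
    """One few-shot episode: self-training on unlabeled data with two classifiers."""
    opt = torch.optim.SGD(list(clf_a.parameters()) + list(clf_b.parameters()), lr=lr)

    with torch.no_grad():                           # frozen feature extractor
        z_u = encoder(x_unlabeled)
        z_s = encoder(x_support)
        # Cherry-picking: a soft weighting network scores how trustworthy each
        # pseudo-labeled sample is (weights in [0, 1]); in the paper this
        # network is meta-learned, here it is simply given.
        w = torch.sigmoid(weight_net(z_u)).squeeze(-1)

    for _ in range(inner_steps):
        # 1) Pseudo-labeling: each classifier labels the unlabeled features.
        logits_a, logits_b = clf_a(z_u), clf_b(z_u)
        pseudo_a, pseudo_b = logits_a.argmax(dim=1), logits_b.argmax(dim=1)

        # 2) Cross-teaching (simplified to an agreement check): each classifier
        #    learns from the other's pseudo-labels, and samples on which the
        #    two classifiers disagree are rejected as likely noise.
        agree = (pseudo_a == pseudo_b).float()
        loss_a = (w * agree * F.cross_entropy(logits_a, pseudo_b, reduction="none")).mean()
        loss_b = (w * agree * F.cross_entropy(logits_b, pseudo_a, reduction="none")).mean()

        # 3) Fine-tune on the handful of labeled support samples so the
        #    classifiers stay anchored to clean supervision.
        loss_sup = F.cross_entropy(clf_a(z_s), y_support) + F.cross_entropy(clf_b(z_s), y_support)

        opt.zero_grad()
        (loss_a + loss_b + loss_sup).backward()
        opt.step()

    return clf_a, clf_b


# Example wiring with arbitrary dimensions (n_way classes, 64-d features):
# encoder    = a pretrained backbone mapping images to 64-d features
# clf_a      = torch.nn.Linear(64, n_way); clf_b = torch.nn.Linear(64, n_way)
# weight_net = torch.nn.Linear(64, 1)
```

In the full method, the soft weighting network and the classifier initialization would be optimized in an outer meta-learning loop across episodes; this sketch only covers the inner, per-episode adaptation.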
Year
2021
DOI
10.1016/j.cviu.2021.103270
Venue
Computer Vision and Image Understanding
Keywords
41A05, 41A10, 65D05, 65D17
DocType
Journal
Volume
212
Issue
1
ISSN
1077-3142
Citations
0
PageRank
0.34
References
8
Authors
7
Name             Order  Citations  PageRank
Xinzhe Li        1      14         3.67
Jianqiang Huang  2      55         19.18
Yaoyao Liu       3      52         3.88
Qin Zhou         4      0          1.01
Shibao Zheng     5      214        30.64
Bernt Schiele    6      12901      971.29
Sun Qianru       7      227        19.41