Title
Few-shot Learning with Online Self-Distillation
Abstract
Few-shot learning is a long-standing problem in learning to learn. It typically involves training a model on an extremely small amount of data and testing it on out-of-distribution data. Recent few-shot learning research has focused on developing good representation models that can quickly adapt to test tasks. To that end, we propose a model that learns representations through online self-distillation. Our model combines supervised training with knowledge distillation via a continuously updated teacher. We also identify data augmentation as playing an important role in producing robust features. Our final model is trained with CutMix augmentation and online self-distillation. On the commonly used benchmark miniImageNet, our model achieves 67.07% and 83.03% accuracy under the 5-way 1-shot and 5-way 5-shot settings, respectively, outperforming counterparts of its kind by 2.25% and 0.89%.
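The abstract describes the training recipe only at a high level. Below is a minimal sketch of how such a recipe could look, assuming a PyTorch setup: a student network trained with cross-entropy on CutMix-ed inputs plus a KL distillation loss against a teacher that is continuously updated as an exponential moving average (EMA) of the student. The ResNet-18 backbone, EMA momentum, temperature, and loss weight are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch of online self-distillation with CutMix (not the authors' code).
import copy
import numpy as np
import torch
import torch.nn.functional as F
from torchvision.models import resnet18


def cutmix(images, labels, alpha=1.0):
    """Paste a random box from a permuted batch onto each image (CutMix)."""
    lam = np.random.beta(alpha, alpha)
    perm = torch.randperm(images.size(0))
    h, w = images.shape[2:]
    cut_h, cut_w = int(h * (1 - lam) ** 0.5), int(w * (1 - lam) ** 0.5)
    cy, cx = np.random.randint(h), np.random.randint(w)
    y1, y2 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, h)
    x1, x2 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, w)
    images = images.clone()
    images[:, :, y1:y2, x1:x2] = images[perm, :, y1:y2, x1:x2]
    lam = 1 - (y2 - y1) * (x2 - x1) / (h * w)  # adjust mixing ratio for clipping
    return images, labels, labels[perm], lam


@torch.no_grad()
def ema_update(teacher, student, momentum=0.999):
    """Online self-distillation: the teacher tracks the student via EMA."""
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(momentum).add_(s, alpha=1 - momentum)
    for tb, sb in zip(teacher.buffers(), student.buffers()):
        tb.copy_(sb)  # keep BatchNorm running statistics in sync


def train_step(student, teacher, optimizer, images, labels,
               temperature=4.0, kd_weight=0.5):
    mixed, y_a, y_b, lam = cutmix(images, labels)
    logits = student(mixed)
    # Supervised loss on the CutMix-ed targets.
    ce = lam * F.cross_entropy(logits, y_a) + (1 - lam) * F.cross_entropy(logits, y_b)
    # Distillation loss against the EMA teacher's soft predictions.
    with torch.no_grad():
        soft_targets = F.softmax(teacher(mixed) / temperature, dim=1)
    kd = F.kl_div(F.log_softmax(logits / temperature, dim=1),
                  soft_targets, reduction="batchmean") * temperature ** 2
    loss = (1 - kd_weight) * ce + kd_weight * kd
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)
    return loss.item()


if __name__ == "__main__":
    student = resnet18(num_classes=64)  # e.g. 64 base classes for miniImageNet pre-training
    teacher = copy.deepcopy(student)
    for p in teacher.parameters():
        p.requires_grad_(False)
    optimizer = torch.optim.SGD(student.parameters(), lr=0.05, momentum=0.9)
    images, labels = torch.randn(8, 3, 84, 84), torch.randint(0, 64, (8,))
    print(train_step(student, teacher, optimizer, images, labels))
```

Because the teacher is refreshed every step rather than frozen after a first training stage, the distillation is "online": the student regularizes itself against a smoothed version of its own recent weights.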
Year: 2021
DOI: 10.1109/ICCVW54120.2021.00124
Venue: 2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW 2021)
DocType: Conference
Volume: 2021
Issue: 1
ISSN: 2473-9936
Citations: 0
PageRank: 0.34
References: 2
Authors: 2
Sihan Liu (Order: 1, Citations: 0, PageRank: 0.34)
Yue Wang (Order: 2, Citations: 253, PageRank: 7.65)