Abstract
---
Few-shot classification aims to learn a classifier to recognize classes unseen during training from limited labeled examples. While significant progress has been made, the growing complexity of network designs, meta-learning algorithms, and differences in implementation details make a fair comparison difficult. In this paper, we present 1) a consistent comparative analysis of several representative few-shot classification algorithms, with results showing that deeper backbones significantly reduce the performance differences among methods on datasets with limited domain differences, 2) a modified baseline method that surprisingly achieves competitive performance when compared with the state-of-the-art on both the mini-ImageNet and the CUB datasets, and 3) a new experimental setting for evaluating the cross-domain generalization ability of few-shot classification algorithms. Our results reveal that reducing intra-class variation is an important factor when the feature backbone is shallow, but not as critical when using deeper backbones. In a realistic cross-domain evaluation setting, we show that a baseline method with a standard fine-tuning practice compares favorably against other state-of-the-art few-shot learning algorithms.
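The modified baseline the abstract refers to (Baseline++ in the paper) replaces the standard linear classification head with a cosine-similarity classifier over features from a frozen backbone, which reduces intra-class variation. The sketch below illustrates the cosine-classification idea in NumPy; note one simplification labeled in the comments: the paper fine-tunes the class weight vectors on the support set, whereas here per-class means of normalized support features are used as a stand-in for those learned weights.

```python
import numpy as np

def l2_normalize(x):
    """Scale each row of x to unit L2 norm."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def cosine_classify(support_feats, support_labels, query_feats):
    """Assign each query feature to the class whose weight vector has the
    highest cosine similarity.

    Simplification (not the paper's exact procedure): Baseline++ learns the
    per-class weight vectors by fine-tuning on the support set; here we use
    the mean of the normalized support features of each class instead.
    """
    classes = np.unique(support_labels)
    s = l2_normalize(support_feats)
    # One prototype (stand-in weight vector) per class, re-normalized.
    protos = np.stack([s[support_labels == c].mean(axis=0) for c in classes])
    protos = l2_normalize(protos)
    # Cosine similarity = dot product of unit vectors.
    sims = l2_normalize(query_feats) @ protos.T
    return classes[np.argmax(sims, axis=1)]
```

In an N-way K-shot episode, `support_feats` would hold the N*K embeddings of the labeled examples (from a frozen backbone) and `query_feats` the embeddings to classify.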
Year | Venue | DocType
---|---|---
2019 | ICLR | Conference

arXiv ID | Citations | PageRank
---|---|---
abs/1904.04232 | 21 | 0.59

References | Authors
---|---
0 | 5

Name | Order | Citations | PageRank |
---|---|---|---|
Wei-Yu Chen | 1 | 51 | 2.75 |
Yen-Cheng Liu | 2 | 48 | 7.12 |
Zsolt Kira | 3 | 152 | 22.55
Yu-Chiang Frank Wang | 4 | 914 | 61.63 |
Jia-Bin Huang | 5 | 920 | 42.90 |