Abstract
The support/query (S/Q) episodic training strategy is widely used in modern meta-learning algorithms and is believed to improve their generalization to test environments. This paper conducts a theoretical investigation of the effect of this training strategy on generalization. From a stability perspective, we analyze the generalization error bound of generic meta-learning algorithms trained with such a strategy. We show that the S/Q episodic training strategy naturally leads to a counterintuitive generalization bound of O(1/√n), which depends only on the number of tasks n and is independent of the inner-task sample size m. Under the common assumption m ≪ n for few-shot learning, the O(1/√n) bound implies strong generalization guarantees for modern meta-learning algorithms in the few-shot regime. To further explore the influence of training strategies on generalization, we propose a leave-one-out (LOO) training strategy for meta-learning and compare it with S/Q training. Experiments on standard few-shot regression and classification tasks with popular meta-learning algorithms validate our analysis.
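To make the S/Q episodic training strategy concrete, below is a minimal sketch of one possible instantiation: a MAML-style learner on synthetic sine-regression tasks, a common few-shot regression benchmark. This is not the paper's experimental setup; `sample_sine_task`, the network architecture, `inner_lr`, and all hyperparameters are illustrative assumptions.

```python
# A minimal sketch of support/query (S/Q) episodic training, assuming a
# MAML-style meta-learner on synthetic sine-regression tasks. All names
# and hyperparameters here are illustrative, not from the paper.
import torch
import torch.nn as nn

def sample_sine_task(m_support=10, m_query=10):
    """Draw one task y = a*sin(x + b) and split its samples into a
    support set (inner adaptation) and a query set (outer loss)."""
    a = torch.rand(1) * 4.9 + 0.1            # amplitude in [0.1, 5.0]
    b = torch.rand(1) * 3.14                 # phase in [0, pi]
    x = torch.rand(m_support + m_query, 1) * 10 - 5
    y = a * torch.sin(x + b)
    return (x[:m_support], y[:m_support]), (x[m_support:], y[m_support:])

model = nn.Sequential(nn.Linear(1, 40), nn.ReLU(), nn.Linear(40, 1))
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
inner_lr, loss_fn = 0.01, nn.MSELoss()

for step in range(1000):                     # n meta-training episodes
    (xs, ys), (xq, yq) = sample_sine_task()

    # Inner loop: one gradient step on the SUPPORT set, done functionally
    # (create_graph=True) so the outer gradient flows through adaptation.
    params = list(model.parameters())
    support_loss = loss_fn(model(xs), ys)
    grads = torch.autograd.grad(support_loss, params, create_graph=True)
    adapted = [p - inner_lr * g for p, g in zip(params, grads)]

    # Evaluate the adapted parameters on the QUERY set. The manual forward
    # pass mirrors the Sequential model: Linear -> ReLU -> Linear.
    h = torch.relu(xq @ adapted[0].t() + adapted[1])
    query_loss = loss_fn(h @ adapted[2].t() + adapted[3], yq)

    # Outer loop: the meta-update is driven by the query loss alone.
    meta_opt.zero_grad()
    query_loss.backward()
    meta_opt.step()
```

The defining feature of S/Q training is visible in the outer loop: meta-parameters are updated only through the query loss of each episode. The paper's analysis concerns exactly this structure, with the resulting bound scaling in the number of such episodes n rather than the per-episode support size m.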
Year | Venue | DocType
---|---|---
2020 | Advances in Neural Information Processing Systems (NeurIPS 2020) | Conference

Volume | ISSN | Citations
---|---|---
33 | 1049-5258 | 0

PageRank | References | Authors
---|---|---
0.34 | 0 | 6
Name | Order | Citations | PageRank |
---|---|---|---
Jiaxin Chen | 1 | 3 | 1.10 |
Xiao-Ming Wu | 2 | 0 | 0.34 |
Yanke Li | 3 | 0 | 0.34 |
Xiao-Ming Wu | 4 | 110 | 7.15 |
Liming Zhan | 5 | 3 | 3.13 |
Fu-lai Chung | 6 | 244 | 34.50 |