Abstract
---
Gradient-based meta-learning techniques are both widely applicable and proficient at solving challenging few-shot learning and fast adaptation problems. However, they face the practical difficulty of operating in high-dimensional parameter spaces in extreme low-data regimes. We show that it is possible to bypass these limitations by learning a low-dimensional latent generative representation of model parameters and performing gradient-based meta-learning in this space with latent embedding optimization (LEO), effectively decoupling the gradient-based adaptation procedure from the underlying high-dimensional space of model parameters. Our evaluation shows that LEO can achieve state-of-the-art performance on the competitive 5-way 1-shot miniImageNet classification task.
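The core idea in the abstract, adapting a low-dimensional latent code rather than the high-dimensional parameters themselves, can be illustrated with a toy sketch. This is not the paper's method (LEO learns an encoder/decoder and meta-trains them end to end); here a fixed random linear decoder, the dimensions, and the learning rate are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration only (not from the paper).
latent_dim, param_dim, n_shots = 2, 10, 5

# Decoder: maps a latent code z to model parameters theta = W @ z.
# In LEO the decoder is learned; a fixed random matrix stands in here.
W = rng.normal(size=(param_dim, latent_dim))

# A few-shot-style regression task: far fewer examples than parameters.
X = rng.normal(size=(n_shots, param_dim))
true_theta = rng.normal(size=param_dim)
y = X @ true_theta

def loss(z):
    theta = W @ z                        # decode latent code to parameters
    resid = X @ theta - y
    return 0.5 * np.mean(resid ** 2)

def grad_z(z):
    theta = W @ z
    resid = X @ theta - y
    grad_theta = X.T @ resid / n_shots   # dL/dtheta
    return W.T @ grad_theta              # chain rule: gradient w.r.t. z

# Inner-loop adaptation: gradient steps on the 2-D code z,
# never directly on the 10-D parameter vector theta.
z = np.zeros(latent_dim)
losses = [loss(z)]
for _ in range(200):
    z -= 1e-3 * grad_z(z)
    losses.append(loss(z))
```

Because only `latent_dim` numbers are adapted, the search space for the inner loop is far smaller than the parameter space, which is the decoupling the abstract describes.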
Year | Venue | Field
---|---|---
2018 | International Conference on Learning Representations | Embedding, Artificial intelligence, Generative grammar, Machine learning, Mathematics

DocType | Volume | Citations
---|---|---
Journal | abs/1807.05960 | 28

PageRank | References | Authors
---|---|---
0.69 | 14 | 7
Name | Order | Citations | PageRank |
---|---|---|---|
Andrei A. Rusu | 1 | 169 | 6.80 |
Dushyant Rao | 2 | 116 | 8.10 |
Jakub Sygnowski | 3 | 32 | 2.45 |
Oriol Vinyals | 4 | 9419 | 418.45 |
Razvan Pascanu | 5 | 2596 | 199.21 |
Simon Osindero | 6 | 4878 | 398.74 |
R. Hadsell | 7 | 1678 | 100.80 |