Abstract
---
One-shot learning is usually tackled by using generative models or discriminative embeddings. Discriminative methods based on deep learning, which are very effective in other learning scenarios, are ill-suited for one-shot learning as they need large amounts of training data. In this paper, we propose a method to learn the parameters of a deep model in one shot. We construct the learner as a second deep network, called a learnet, which predicts the parameters of a pupil network from a single exemplar. In this manner we obtain an efficient feed-forward one-shot learner, trained end-to-end by minimizing a one-shot classification objective in a learning to learn formulation. In order to make the construction feasible, we propose a number of factorizations of the parameters of the pupil network. We demonstrate encouraging results by learning characters from single exemplars in Omniglot, and by tracking visual objects from a single initial exemplar in the Visual Object Tracking benchmark.
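The core construction in the abstract — a learnet that predicts the parameters of a pupil network from one exemplar, made tractable by factorizing those parameters — can be sketched in a few lines of NumPy. This is a minimal illustrative sketch, not the paper's architecture: the dimensions, the `tanh` nonlinearity, the single linear pupil layer, and the names `learnet_predict_weights`/`pupil_forward` are all assumptions chosen for brevity. The key point it shows is the factorization `W(z) = A · diag(g(z)) · B`, where only the small diagonal depends on the exemplar, so the learnet predicts k numbers rather than a full d_out × d_in weight matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, d_z, k = 8, 4, 8, 6  # illustrative sizes

# Fixed projections of the factorization (in the paper these are
# learned end-to-end; here they are random placeholders).
A = rng.standard_normal((d_out, k)) / np.sqrt(k)    # output projection
B = rng.standard_normal((k, d_in)) / np.sqrt(d_in)  # input projection
M = rng.standard_normal((k, d_z)) / np.sqrt(d_z)    # learnet head: exemplar -> diagonal

def learnet_predict_weights(z):
    """Predict the pupil's weight matrix from a single exemplar z.

    Factorized form W(z) = A @ diag(g(z)) @ B: only the k diagonal
    entries g(z) depend on the exemplar, so the learnet outputs k
    numbers instead of d_out * d_in parameters.
    """
    g = np.tanh(M @ z)             # exemplar-dependent diagonal
    return A @ np.diag(g) @ B      # (d_out, d_in) pupil weights

def pupil_forward(x, W):
    """Pupil network: here just one linear layer with predicted weights."""
    return W @ x

z = rng.standard_normal(d_z)   # the one-shot exemplar
x = rng.standard_normal(d_in)  # a test input
W = learnet_predict_weights(z)
y = pupil_forward(x, W)
print(W.shape, y.shape)  # (4, 8) (4,)
```

In the actual method both the learnet and the fixed factors are trained jointly by backpropagating a one-shot classification loss through the predicted parameters; the forward pass at test time is a single feed-forward evaluation, as above.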
Year | Venue | DocType
---|---|---
2016 | ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 29 (NIPS 2016) | Conference

Volume | ISSN | Citations
---|---|---
29 | 1049-5258 | 41

PageRank | References | Authors
---|---|---
1.52 | 14 | 5
Name | Order | Citations | PageRank
---|---|---|---
Bertinetto Luca | 1 | 464 | 14.46 |
João F. Henriques | 2 | 1156 | 43.09 |
Jack Valmadre | 3 | 466 | 14.08 |
Philip H. S. Torr | 4 | 9140 | 636.18 |
Andrea Vedaldi | 5 | 8493 | 432.00 |