Title
Adaptive Gradient-Based Meta-Learning Methods
Abstract
We build a theoretical framework for designing and understanding practical meta-learning methods that integrates sophisticated formalizations of task-similarity with the extensive literature on online convex optimization and sequential prediction algorithms. Our approach enables the task-similarity to be learned adaptively, provides sharper transfer-risk bounds in the setting of statistical learning-to-learn, and leads to straightforward derivations of average-case regret bounds for efficient algorithms in settings where the task-environment changes dynamically or the tasks share a certain geometric structure. We use our theory to modify several popular meta-learning algorithms and improve their meta-test-time performance on standard problems in few-shot learning and federated learning.
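The abstract describes gradient-based meta-learning in which a notion of task-similarity is learned adaptively alongside a shared initialization. The following NumPy sketch illustrates the general idea only: a Reptile-style meta-update of a shared initialization whose per-coordinate step sizes are scaled by how much each coordinate moves within tasks, a crude stand-in for "learning the task-similarity adaptively." The quadratic tasks, the function names (sample_task, meta_train), and the displacement-based scaling rule are assumptions made for illustration, not the paper's algorithm.

```python
import numpy as np

def sample_task(rng, dim=5):
    """Return the gradient of a random quadratic loss 0.5 * ||w - target||^2."""
    target = rng.normal(size=dim)  # this task's optimum (illustrative assumption)
    return lambda w: w - target

def meta_train(num_tasks=200, inner_steps=10, inner_lr=0.1,
               meta_lr=0.5, dim=5, seed=0):
    rng = np.random.default_rng(seed)
    phi = np.zeros(dim)            # meta-initialization shared across tasks
    sq_disp = np.full(dim, 1e-8)   # running squared within-task displacement
    for t in range(num_tasks):
        grad = sample_task(rng, dim)
        w = phi.copy()
        for _ in range(inner_steps):   # within-task gradient descent from phi
            w -= inner_lr * grad(w)
        disp = w - phi                 # how far this task pulled each coordinate
        sq_disp += disp ** 2
        # Coordinates that move a lot across tasks (low similarity) get larger
        # meta-steps; stable coordinates (high similarity) get smaller ones.
        scale = np.sqrt(sq_disp / (t + 1))
        scale /= scale.max() + 1e-12   # normalize the largest scale to ~1
        phi += meta_lr * scale * disp  # Reptile-style interpolation toward w
    return phi

if __name__ == "__main__":
    phi = meta_train()
    print("learned initialization:", np.round(phi, 3))
```

In the paper's actual settings the inner loop would be few-shot fine-tuning of a neural network or a federated client update; a closed-form quadratic loss is used here only to keep the sketch self-contained and runnable.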
Year
2019
Venue
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NeurIPS 2019)
Keywords
federated learning
Field
Computer science, Artificial intelligence, Machine learning
DocType
Journal
Volume
32
ISSN
1049-5258
Citations
0
PageRank
0.34
References
0
Authors
3
Name                  Order  Citations  PageRank
Mikhail Khodak        1      10         3.19
Maria-Florina Balcan  2      1445       105.01
Ameet Talwalkar       3      1394       66.51