Title
Provable Guarantees for Gradient-Based Meta-Learning.
Abstract
We study the problem of meta-learning through the lens of online convex optimization, developing a meta-algorithm bridging the gap between popular gradient-based meta-learning and classical regularization-based multi-task transfer methods. Our method is the first to simultaneously achieve good sample-efficiency guarantees in the convex setting, with generalization bounds that improve with task similarity, while also being computationally scalable to modern deep learning architectures and the many-task setting. Despite its simplicity, the algorithm matches, up to a constant factor, a lower bound on the performance of any such parameter-transfer method under natural task-similarity assumptions. We use experiments in both convex and deep learning settings to verify and demonstrate the applicability of our theory.
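The gradient-based meta-learning family the abstract refers to includes Reptile-style initialization transfer, where a shared initialization learned across tasks plays the role of a regularization center for each task. As a rough, hedged illustration of that parameter-transfer idea only (not the paper's exact meta-algorithm or its regret analysis), here is a minimal sketch; the function names, toy quadratic tasks, and all hyperparameters below are illustrative assumptions.

```python
# Hedged sketch of Reptile-style parameter transfer: learn an initialization
# phi that is pulled toward each task's within-task solution, so phi acts as
# a shared regularization center. Illustrative only; NOT the paper's method.
import numpy as np

def inner_sgd(phi, grad, lr=0.05, steps=20):
    """A few steps of within-task gradient descent starting from phi."""
    w = phi.copy()
    for _ in range(steps):
        w = w - lr * grad(w)
    return w

def meta_train(task_grads, dim, meta_lr=0.2, rounds=500, seed=0):
    """Meta-update: move phi toward each sampled task's solution.
    When tasks are similar (nearby optima), phi settles near their
    common center -- the regime where task-similarity bounds help."""
    rng = np.random.default_rng(seed)
    phi = np.zeros(dim)
    for _ in range(rounds):
        grad = task_grads[rng.integers(len(task_grads))]
        w = inner_sgd(phi, grad)
        phi = phi + meta_lr * (w - phi)  # gradient-based meta-step
    return phi

if __name__ == "__main__":
    # Toy convex tasks: quadratics ||w - c_t||^2 with nearby centers c_t.
    centers = [np.array([1.0, 0.9]), np.array([0.9, 1.1]), np.array([1.1, 1.0])]
    task_grads = [lambda w, c=c: 2.0 * (w - c) for c in centers]
    phi = meta_train(task_grads, dim=2)
    print("learned initialization:", phi)  # lands near the mean of the centers
```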
Year
2019
Venue
CoRR
DocType
Journal
Volume
abs/1902.10644
Citations
0
PageRank
0.34
References
0
Authors
3
Name                   Order  Citations  PageRank
Mikhail Khodak         1      4          3.09
Maria-Florina Balcan   2      1445       105.01
Ameet Talwalkar        3      1394       66.51