Title: Meta-Learning by the Baldwin Effect
Abstract
The scope of the Baldwin effect was recently called into question by two papers that closely examined the seminal work of Hinton and Nowlan. To date there has been no demonstration of its necessity in empirically challenging tasks. Here we show that the Baldwin effect is capable of evolving few-shot supervised and reinforcement learning mechanisms by shaping the hyperparameters and the initial parameters of deep learning algorithms. Furthermore, it can genetically accommodate strong learning biases on the same set of problems as a recent machine learning algorithm called MAML (Model-Agnostic Meta-Learning), which uses second-order gradients instead of evolution to learn a set of reference parameters (initial weights) that allow rapid adaptation to tasks sampled from a distribution. Whilst in simple cases MAML is more data efficient than the Baldwin effect, the Baldwin effect is more general: it does not require gradients to be backpropagated to the reference parameters or hyperparameters, and it permits effectively any number of gradient updates in the inner loop. The Baldwin effect learns strong learning-dependent biases, rather than purely genetically accommodating fixed behaviours in a learning-independent manner.
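As a concrete illustration of the mechanism the abstract describes (an outer evolutionary loop shaping initial parameters and hyperparameters, with an inner loop of gradient updates on each sampled task), here is a minimal, hypothetical Python sketch. It is not the paper's implementation: the toy task distribution (`sample_task`), the genome layout (two initial weights plus a log learning rate), and the population settings are illustrative assumptions, whereas the paper applies the idea to deep networks on few-shot supervised and reinforcement learning tasks. The key Baldwinian feature is that the weights learned in the inner loop are discarded; only the genome and its post-learning fitness drive selection.

```python
# Hypothetical sketch of Baldwinian meta-learning: evolution shapes the
# initial weights and the inner-loop learning rate, while a few gradient
# steps adapt each individual to a sampled task. Learned weights are NOT
# written back to the genome (Baldwinian, not Lamarckian, inheritance).
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """Toy task distribution: linear regression y = a*x + b."""
    a, b = rng.uniform(-2.0, 2.0, size=2)
    x = rng.uniform(-1.0, 1.0, size=10)
    return x, a * x + b

def inner_loop(theta, lr, n_steps=5):
    """Few-shot adaptation: a handful of SGD steps on one sampled task."""
    x, y = sample_task()
    w = theta.copy()  # learned weights are a throwaway copy of the genome
    for _ in range(n_steps):
        err = w[0] * x + w[1] - y
        grad = 2.0 * np.array([(err * x).mean(), err.mean()])  # dMSE/dw
        w -= lr * grad
    return ((w[0] * x + w[1] - y) ** 2).mean()  # post-adaptation loss

def fitness(genome, n_tasks=20):
    """Genome = [initial weight, initial bias, log learning rate]."""
    theta, log_lr = genome[:2], genome[2]
    return -np.mean([inner_loop(theta, np.exp(log_lr)) for _ in range(n_tasks)])

# Outer loop: simple truncation-selection evolution over genomes.
pop = rng.normal(0.0, 1.0, size=(50, 3))
for gen in range(100):
    scores = np.array([fitness(g) for g in pop])
    elite = pop[np.argsort(scores)[-10:]]             # keep the 10 fittest
    parents = elite[rng.integers(0, 10, size=40)]     # clone ...
    pop = np.vstack([elite,
                     parents + rng.normal(0.0, 0.05, size=parents.shape)])  # ... and mutate

print("mean post-adaptation loss of best genome:", -scores.max())
```

Because fitness is measured after learning, selection favours initial parameters and a learning rate from which a few gradient steps suffice on any task from the distribution; this is the sense in which the abstract contrasts learning-dependent biases with the genetic accommodation of fixed behaviours.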
Year: 2018
DOI: 10.1145/3205651.3205763
Venue: GECCO (Companion)
DocType: Conference
Volume: abs/1806.07917
ISBN: 978-1-4503-5764-7
Citations: 3
PageRank: 0.38
References: 20
Authors (9)

Name                | Order | Citations | PageRank
--------------------|-------|-----------|---------
Chrisantha Fernando | 1     | 314       | 24.46
Jakub Sygnowski     | 2     | 32        | 2.45
Simon Osindero      | 3     | 4878      | 398.74
Jane X. Wang        | 4     | 19        | 3.97
Tom Schaul          | 5     | 916       | 79.40
Denis Teplyashin    | 6     | 43        | 2.89
Pablo Sprechmann    | 7     | 625       | 24.21
Alexander Pritzel   | 8     | 30        | 1.97
Andrei A. Rusu      | 9     | 156       | 5.99