Title
Metacontrol for Adaptive Imagination-Based Optimization
Abstract
Many machine learning systems are built to solve the hardest examples of a particular task, which often makes them large and expensive to run---especially with respect to the easier examples, which might require much less computation. For an agent with a limited computational budget, this "one-size-fits-all" approach may result in the agent wasting valuable computation on easy examples, while not spending enough on hard examples. Rather than learning a single, fixed policy for solving all instances of a task, we introduce a metacontroller which learns to optimize a sequence of "imagined" internal simulations over predictive models of the world in order to construct a more informed, and more economical, solution. The metacontroller component is a model-free reinforcement learning agent, which decides both how many iterations of the optimization procedure to run, as well as which model to consult on each iteration. The models (which we call "experts") can be state transition models, action-value functions, or any other mechanism that provides information useful for solving the task, and can be learned on-policy or off-policy in parallel with the metacontroller. When the metacontroller, controller, and experts were trained with "interaction networks" (Battaglia et al., 2016) as expert models, our approach was able to solve a challenging decision-making problem under complex non-linear dynamics. The metacontroller learned to adapt the amount of computation it performed to the difficulty of the task, and learned how to choose which experts to consult by factoring in both their reliability and individual computational resource costs. This allowed the metacontroller to achieve a lower overall cost (task loss plus computational cost) than more traditional fixed policy approaches. These results demonstrate that our approach is a powerful framework for using...
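The abstract describes a loop in which a metacontroller repeatedly decides whether to run another "imagined" optimization step and, if so, which expert model to consult, trading task loss against per-consultation compute cost. The sketch below illustrates that control flow only; all names (`Expert`, the greedy expert choice, the distance-based stopping rule, the cost values) are illustrative assumptions, not the paper's actual architecture, which uses a learned model-free RL policy for both decisions.

```python
# Minimal sketch of the metacontrol loop from the abstract.
# Hypothetical stand-in: a learned metacontroller is replaced by a
# greedy expert choice and a hand-coded stopping rule.
import random


class Expert:
    """An expert: any mechanism that refines a proposed action
    (e.g. a transition-model rollout). Here: a noisy step toward
    the optimum, with a reliability/cost trade-off."""

    def __init__(self, noise, cost):
        self.noise = noise  # lower noise = more reliable advice
        self.cost = cost    # computational cost per consultation

    def consult(self, action, target):
        # Move the proposed action toward the target, with expert-specific noise.
        return action + 0.5 * (target - action) + random.gauss(0, self.noise)


def metacontrol_episode(experts, target, max_steps=10, stop_threshold=0.05):
    """Run imagined optimization iterations until the stopping rule fires.

    Returns (overall cost = task loss + compute cost, iterations used).
    Easy instances stop early and incur little compute cost; hard ones
    use more iterations -- the adaptivity the abstract describes.
    """
    action, total_cost, steps = 0.0, 0.0, 0
    for _ in range(max_steps):
        # Greedy stand-in for the learned policy: cheapest-useful expert.
        expert = min(experts, key=lambda e: e.cost + e.noise)
        action = expert.consult(action, target)
        total_cost += expert.cost
        steps += 1
        if abs(action - target) < stop_threshold:  # good enough: stop early
            break
    task_loss = (action - target) ** 2
    return task_loss + total_cost, steps
```

Note the design point this makes concrete: the objective being minimized is the *sum* of task loss and computation cost, so an agent that always runs `max_steps` iterations is penalized on easy instances.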
Year
2017
Venue
ICLR
Field
Control theory, Computer science, Artificial intelligence, Error-driven learning, Factoring, Imagination, Computational resource, Machine learning, Computation, Reinforcement learning
DocType
Volume
abs/1705.02670
Citations
18
Journal
PageRank
0.83
References
1
Authors
6
Name | Order | Citations | PageRank
Jessica B. Hamrick | 1 | 66 | 9.25
Andrew J. Ballard | 2 | 106 | 3.48
Razvan Pascanu | 3 | 2596 | 199.21
Oriol Vinyals | 4 | 9419 | 418.45
Nicolas Heess | 5 | 1762 | 94.77
Peter Battaglia | 6 | 442 | 22.63