Title
State Space Decomposition and Subgoal Creation for Transfer in Deep Reinforcement Learning.
Abstract
Typical reinforcement learning (RL) agents learn to complete tasks specified by reward functions tailored to their domain, so the policies they learn do not generalize even to similar domains. To address this issue, we develop a framework in which a deep RL agent learns to generalize policies from smaller, simpler domains to more complex ones using a meta-controller with a recurrent attention mechanism. The task is presented to the agent as an image together with an instruction specifying the goal. The meta-controller guides the agent toward its goal by designing a sequence of smaller subtasks on the part of the state space within its attention window, effectively decomposing the task. As a baseline, we also consider a setup without attention. Our experiments show that the meta-controller learns to create subgoals within its attention window.
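The abstract describes a meta-controller that attends to a region of the state image and proposes subgoals inside that region. A minimal sketch of that decomposition idea, with hypothetical helper names (`attention_window`, `propose_subgoal`) and a toy saliency-based subgoal rule standing in for the learned meta-controller:

```python
import numpy as np

def attention_window(state, center, size):
    """Crop a square window of the state image around `center`,
    clipped to the state boundaries. Returns the window and its
    top-left offset in full-state coordinates."""
    h, w = state.shape
    r, c = center
    half = size // 2
    top, left = max(0, r - half), max(0, c - half)
    bottom, right = min(h, top + size), min(w, left + size)
    return state[top:bottom, left:right], (top, left)

def propose_subgoal(window, offset):
    """Stand-in for the learned meta-controller: pick the most
    salient cell inside the attended window as the next subgoal,
    returned in full-state coordinates."""
    idx = np.unravel_index(np.argmax(window), window.shape)
    return (idx[0] + offset[0], idx[1] + offset[1])

# Toy 8x8 state image with a single salient "goal" pixel.
state = np.zeros((8, 8))
state[6, 5] = 1.0

window, offset = attention_window(state, center=(5, 5), size=4)
subgoal = propose_subgoal(window, offset)  # -> (6, 5)
```

In the paper's setting the subgoal proposal is learned rather than hand-coded, but the structure is the same: restrict the state space to the attended region, then pose a smaller subtask within it.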
Year
2017
Venue
arXiv: Artificial Intelligence
Field
Computer science, Artificial intelligence, State space, Machine learning, Reinforcement learning
DocType
Journal
Volume
abs/1705.08997
Citations
0
PageRank
0.34
References
3
Authors
5
Name                  Order  Citations  PageRank
Himanshu Sahni        1      27         3.99
Saurabh Kumar         2      10         1.83
Farhan Tejani         3      3          0.72
Yannick Schroecker    4      0          0.68
Charles L. Isbell     5      504        65.79