Abstract |
---|
A central challenge in reinforcement learning is discovering effective policies for tasks where rewards are sparsely distributed. We postulate that in the absence of useful reward signals, an effective exploration strategy should seek out *decision states*. These states lie at critical junctions in the state space, from where the agent can transition to new, potentially unexplored regions. We propose to learn about decision states from prior experience. By training a goal-conditioned policy with an information bottleneck, we can identify decision states by examining where the model actually leverages the goal state. We find that this simple mechanism effectively identifies decision states, even in partially observed settings. In effect, the model learns the sensory cues that correlate with potential subgoals. In new environments, this model can then identify novel subgoals for further exploration, guiding the agent through a sequence of potential decision states and through new regions of the state space. |
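The core idea in the abstract — flagging states where the policy "actually leverages the goal" — can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes we already have, for each state, the action distribution of a goal-conditioned policy and of a goal-agnostic default policy, and it marks states as decision states when the KL divergence between the two is large (the numbers and state names below are purely illustrative):

```python
import numpy as np

def kl_categorical(p, q):
    """KL divergence D_KL(p || q) between two categorical distributions."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(p * np.log(p / q)))

# Hypothetical per-state action distributions (illustrative values only).
# pi_goal[s]: action distribution of the goal-conditioned policy at state s.
# prior[s]:   action distribution of a goal-agnostic default policy at state s.
pi_goal = {
    "corridor": [0.49, 0.51],   # goal barely changes behavior here
    "junction": [0.95, 0.05],   # goal strongly determines the action here
}
prior = {
    "corridor": [0.50, 0.50],
    "junction": [0.50, 0.50],
}

# States where the policy depends heavily on the goal (high KL to the
# default policy) are candidate decision states.
kl_by_state = {s: kl_categorical(pi_goal[s], prior[s]) for s in pi_goal}
decision_states = [s for s, kl in kl_by_state.items() if kl > 0.1]
print(decision_states)  # → ['junction']
```

In the paper's setting this KL term is what the information bottleneck penalizes during training, which is why high-KL states end up coinciding with critical junctions.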
Year | Venue | Keywords
---|---|---
2019 | International Conference on Learning Representations | Sensory cue, Artificial intelligence, Information bottleneck method, State space, Mathematics, Machine learning, Reinforcement learning

DocType | Volume | Citations
---|---|---
Journal | abs/1901.10902 | 4

PageRank | References | Authors
---|---|---
0.38 | 26 | 8
Name | Order | Citations | PageRank |
---|---|---|---
Anirudh Goyal | 1 | 264 | 20.97 |
Riashat Islam | 2 | 162 | 8.27 |
Daniel Strouse | 3 | 4 | 0.38 |
Zafarali Ahmed | 4 | 5 | 0.73 |
Matthew M Botvinick | 5 | 494 | 25.34 |
Hugo Larochelle | 6 | 7692 | 488.99 |
Sergey Levine | 7 | 3377 | 182.21 |
Yoshua Bengio | 8 | 42677 | 3039.83 |