Title
Exploring Hierarchy-Aware Inverse Reinforcement Learning
Abstract
We introduce a new generative model for human planning under the Bayesian Inverse Reinforcement Learning (BIRL) framework which takes into account the fact that humans often plan using hierarchical strategies. We describe the Bayesian Inverse Hierarchical RL (BIHRL) algorithm for inferring the values of hierarchical planners, and use an illustrative toy model to show that BIHRL retains accuracy where standard BIRL fails. Furthermore, BIHRL is able to accurately predict the goals of 'Wikispeedia' game players, with inclusion of hierarchical structure in the model resulting in a large boost in accuracy. We show that BIHRL is able to significantly outperform BIRL even when we only have a weak prior on the hierarchical structure of the plans available to the agent, and discuss the significant challenges that remain for scaling up this framework to more realistic settings.
Year
2018
Venue
arXiv: Artificial Intelligence
Field
Inverse, Toy model, Computer science, Inverse reinforcement learning, Artificial intelligence, Hierarchy, Scaling, Machine learning, Bayesian probability, Generative model
Volume
abs/1807.05037
Journal
1st Workshop on Goal Specifications for Reinforcement Learning, ICML 2018, Stockholm, Sweden, 2018
Citations
1
PageRank
0.35
References
4
Authors
2
Name          Order  Citations  PageRank
Chris Cundy   1      1          1.03
Daniel Filan  2      2          1.17