Title
Active Reinforcement Learning with Monte-Carlo Tree Search.
Abstract
Active Reinforcement Learning (ARL) is a variant of RL in which the agent observes reward information only if it pays a cost. This subtle change makes exploration substantially more challenging. Powerful principles in RL such as optimism, Thompson sampling, and random exploration do not help with ARL. We relate ARL in tabular environments to Bayes-Adaptive MDPs. We provide an ARL algorithm using Monte-Carlo Tree Search that is asymptotically Bayes optimal. Experimentally, this algorithm is near-optimal on small bandit problems and MDPs. On larger MDPs it outperforms a Q-learner augmented with specialised heuristics for ARL. By analysing exploration behaviour in detail, we uncover obstacles to scaling up simulation-based algorithms for ARL.
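To make the ARL setting described in the abstract concrete, the sketch below wraps a standard Bernoulli bandit so that the reward is revealed only when the agent pays a query cost. The environment, cost value, and all interface names are illustrative assumptions for this sketch, not the paper's implementation.

# Minimal sketch of the active-RL interaction protocol: the agent earns
# reward on every step but only *observes* it when it pays a query cost.
# The bandit environment, cost value, and names are illustrative assumptions.
import random


class ActiveBandit:
    """K-armed Bernoulli bandit where observing the reward costs `query_cost`."""

    def __init__(self, success_probs, query_cost=0.1, seed=0):
        self.success_probs = success_probs
        self.query_cost = query_cost
        self.rng = random.Random(seed)

    def step(self, arm, query):
        """Pull `arm`; return (observed_reward, earned_reward).

        The reward is always earned, but it is returned to the agent
        (rather than None) only if `query` is True, in which case the
        query cost is deducted from the earned reward.
        """
        reward = 1.0 if self.rng.random() < self.success_probs[arm] else 0.0
        earned = reward - (self.query_cost if query else 0.0)
        observed = reward if query else None
        return observed, earned


if __name__ == "__main__":
    env = ActiveBandit(success_probs=[0.2, 0.8], query_cost=0.1)

    # Naive baseline: pay to observe rewards for the first 20 pulls,
    # then exploit the empirically best arm without querying.
    counts, sums = [0, 0], [0.0, 0.0]
    total_return = 0.0
    for t in range(200):
        if t < 20:
            arm = t % 2
            obs, earned = env.step(arm, query=True)
            counts[arm] += 1
            sums[arm] += obs
        else:
            means = [sums[a] / max(counts[a], 1) for a in range(2)]
            arm = max(range(2), key=lambda a: means[a])
            _, earned = env.step(arm, query=False)
        total_return += earned
    print(f"total return over 200 steps: {total_return:.1f}")

The naive query-then-exploit baseline above highlights the core trade-off the paper studies: every observation of the reward signal has a price, so the agent must decide not only which actions to take but also when information is worth buying.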
Year
2018
Venue
arXiv: Learning
Field
Monte Carlo tree search, Thompson sampling, Heuristics, If and only if, Artificial intelligence, Exploration behaviour, Scaling, Mathematics, Machine learning, Bayes' theorem, Reinforcement learning
DocType
Journal
Volume
abs/1803.04926
Citations
1
PageRank
0.34
References
19
Authors
2
Name               Order  Citations  PageRank
Sebastian Schulze  1      1          0.68
Evans, Owain       2      73         10.51