Title
Efficient learning in linearly solvable MDP models
Abstract
Linearly solvable Markov Decision Process (MDP) models are a powerful subclass of problems with a simple structure that allows the optimal policy to be written directly in terms of the uncontrolled (passive) dynamics of the environment and the goals of the agent. However, there have been no learning algorithms for this class of models. In this work, we develop a robust learning approach to linearly solvable MDPs. To exploit the simple solution in general problems, we show how to construct passive dynamics from any transition matrix, use Bayesian updating to estimate the model parameters, and apply approximate yet efficient Bayesian exploration to speed learning. In addition, we reduce the computational cost of learning through intermittent Bayesian updating and policy solving. We also give a polynomial bound on the time our learning algorithm needs to converge, and a linear bound for the subclass of reinforcement learning problems in which the transition error depends only on the agent itself. Test results in a grid world compare our algorithm with the BEB algorithm: our algorithm learns more than BEB without losing convergence speed, its advantage grows as the environment becomes more complex, and its performance is more stable after convergence. Finally, we show how to apply our approach to the Cellular Telephones problem by defining its passive dynamics.
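The abstract relies on the standard linearly solvable MDP structure, in which the optimal policy is written directly in terms of the passive dynamics and the state costs. As a minimal sketch only (not the paper's own algorithm), the Python snippet below shows the usual Todorov-style computation: the desirability vector z is the principal eigenvector of diag(exp(-q)) P, and the controlled transitions reweight the passive dynamics by z. The passive dynamics P and cost vector q are assumed given; the paper's construction of P from an arbitrary transition matrix and its Bayesian learning components are not reproduced here.

import numpy as np

def solve_lmdp(P, q, iters=1000):
    """Sketch of the standard linearly solvable MDP solution (assumed formulation).

    P : (n, n) passive transition matrix, rows summing to 1.
    q : (n,) per-state costs.
    Returns the desirability z and the optimal controlled transitions U,
    where U[i, j] is proportional to P[i, j] * z[j].
    """
    G = np.diag(np.exp(-q))
    z = np.ones(len(q))
    for _ in range(iters):          # power iteration on z = diag(exp(-q)) P z
        z = G @ (P @ z)
        z /= z.max()                # normalize to avoid under/overflow
    U = P * z[np.newaxis, :]        # weight passive moves by desirability of the successor
    U /= U.sum(axis=1, keepdims=True)  # renormalize rows to obtain a policy
    return z, U

# Tiny hypothetical example: a 3-state chain where state 2 is cheap.
P = np.array([[0.5, 0.5, 0.0],
              [0.25, 0.5, 0.25],
              [0.0, 0.5, 0.5]])
q = np.array([1.0, 1.0, 0.0])
z, U = solve_lmdp(P, q)
print(np.round(U, 3))  # controlled dynamics shift probability toward state 2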
Year
2013
Venue
IJCAI
Keywords
convergence speed, efficient Bayesian exploration, robust learning approach, solvable MDPs, powerful subclass, BEB algorithm, intermittent Bayesian updating, passive dynamics, simple solution, simple structure, linearly solvable MDP model, efficient learning, computer science
Field
Convergence (routing), Bayesian inference, Polynomial, Computer science, Markov decision process, Artificial intelligence, Time complexity, Machine learning, Grid, Reinforcement learning, Bayesian probability
DocType
Conference
Citations
1
PageRank
0.36
References
7
Authors
2
Name              Order  Citations  PageRank
Ang Li            1      1          0.36
Paul R. Schrater  2      141        22.71