Title
REINFORCEMENT LEARNING FOR POMDP USING STATE CLASSIFICATION
Abstract
Reinforcement learning (RL) has been widely used to solve problems with little feedback from the environment. Q-learning solves Markov decision processes (MDPs) quite well. For partially observable Markov decision processes (POMDPs), a recurrent neural network (RNN) can be used to approximate Q values. However, learning time for these problems is typically very long. We present a new combination of RL and an RNN that finds a good policy for POMDPs in a shorter learning time. The method has two phases: first, the state space is divided into two groups (fully observable states and hidden states); second, a Q-value table stores the values of fully observable states while an RNN approximates the values of hidden states. Experiments on two grid-world problems show that the proposed method enables an agent to acquire a policy with better learning performance than the method using only an RNN.
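The abstract describes the method only at a high level and the record contains no code. The following is a minimal, hypothetical Python sketch of that two-phase idea, assuming the state classification (observable vs. hidden) is already given: a tabular Q-learner handles states classified as fully observable, while a small recurrent network approximates Q values for hidden states. All names (SimpleRNNQ, HybridAgent), the simplified one-step output-weight update, and the epsilon-greedy control are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch (not the authors' code): hybrid Q-table + recurrent Q approximator.
import random
import numpy as np


class SimpleRNNQ:
    """Tiny Elman-style recurrent Q approximator (illustrative only)."""

    def __init__(self, n_obs, n_actions, n_hidden=16, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0.0, 0.1, (n_hidden, n_obs))
        self.W_rec = rng.normal(0.0, 0.1, (n_hidden, n_hidden))
        self.W_out = rng.normal(0.0, 0.1, (n_actions, n_hidden))
        self.h = np.zeros(n_hidden)
        self.lr = lr

    def reset(self):
        """Clear the recurrent state at the start of an episode."""
        self.h[:] = 0.0

    def step(self, obs_vec):
        """Advance the recurrent state on obs_vec and return Q estimates."""
        self.h = np.tanh(self.W_in @ obs_vec + self.W_rec @ self.h)
        return self.W_out @ self.h

    def peek(self, obs_vec):
        """Q estimates for obs_vec without mutating the recurrent state."""
        h = np.tanh(self.W_in @ obs_vec + self.W_rec @ self.h)
        return self.W_out @ h

    def update_output(self, action, td_error, h):
        """One gradient step on the output weights for the taken action (simplified)."""
        self.W_out[action] += self.lr * td_error * h


class HybridAgent:
    """Q-table for observable states, RNN approximator for hidden states."""

    def __init__(self, observable_states, n_obs, n_actions,
                 alpha=0.1, gamma=0.95, epsilon=0.1):
        self.observable = set(observable_states)   # result of the classification phase
        self.q_table = {s: np.zeros(n_actions) for s in observable_states}
        self.rnn = SimpleRNNQ(n_obs, n_actions)
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.n_actions = n_actions
        self._last_q = None   # RNN output cached at the step where we acted
        self._last_h = None   # RNN hidden state cached at that step

    def act(self, state, obs_vec):
        if state in self.observable:
            q = self.q_table[state]
        else:
            q = self.rnn.step(obs_vec)
            self._last_q, self._last_h = q.copy(), self.rnn.h.copy()
        if random.random() < self.epsilon:
            return random.randrange(self.n_actions)
        return int(np.argmax(q))

    def learn(self, state, action, reward, next_state, next_obs_vec, done):
        # Bootstrap target from whichever value store covers the next state.
        if done:
            target = reward
        elif next_state in self.observable:
            target = reward + self.gamma * np.max(self.q_table[next_state])
        else:
            target = reward + self.gamma * np.max(self.rnn.peek(next_obs_vec))
        if state in self.observable:
            q_sa = self.q_table[state][action]
            self.q_table[state][action] = q_sa + self.alpha * (target - q_sa)
        else:
            td_error = target - self._last_q[action]
            self.rnn.update_output(action, td_error, self._last_h)

In a grid-world loop one would call agent.act(state, obs_vec) each step, then agent.learn(...) on the observed transition, and agent.rnn.reset() at episode boundaries so the recurrent state does not carry over between episodes.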
Year: 2007
DOI: 10.1080/08839510802170538
Venue: Applied Artificial Intelligence
Keywords: approximate q value, q value table, q learning, shorter learning time, pomdp using state classification, observable state, hidden state group, reinforcement learning, observable markov decision process, hidden state, better learning performance, recurrent neural network, state space, markov decision process
DocType: Conference
Volume: 22
Issue: 7-8
ISSN: 0883-9514
Citations: 5
PageRank: 0.44
References: 12
Authors: 3
Name            Order   Citations   PageRank
Le Tien Dung    1       10          1.67
Takashi Komeda  2       14          6.47
Motoki Takagi   3       8           4.21