Title
Finite Model Approximations for Partially Observed Markov Decision Processes with Discounted Cost.
Abstract
We consider finite model approximations of discrete-time partially observed Markov decision processes (POMDPs) under the discounted cost criterion. After converting the original partially observed stochastic control problem to a fully observed one on the belief space, the finite models are obtained through uniform quantization of the state and action spaces of the belief space Markov decision process (MDP). Under mild assumptions on the components of the original model, it is established that the policies obtained from these finite models are nearly optimal for the belief space MDP, and hence for the original partially observed problem. The assumptions essentially require that the belief space MDP satisfies a mild weak continuity condition. We provide examples and introduce explicit approximation procedures for the quantization of the set of probability measures on the state space of the POMDP (i.e., the belief space).
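For illustration, the following Python sketch (not taken from the paper, whose construction covers general state and observation spaces) shows the finite-model idea on a toy finite-state POMDP: the belief simplex is quantized with a uniform grid, the exact Bayesian belief updates are mapped to their nearest grid points, and standard value iteration is run on the resulting finite belief-space MDP. All model parameters (T, O, c, the discount factor beta, and the grid resolution N) are made-up assumptions.

```python
# Minimal sketch of the finite-model idea for a toy finite-state POMDP.
# This is an illustrative assumption-laden example, not the paper's construction.
import itertools
import numpy as np

def uniform_simplex_grid(n_states, N):
    """All probability vectors whose entries are multiples of 1/N (uniform quantization).
    Intended for small toy models; the grid size grows combinatorially in n_states."""
    pts = [np.array(comp) / N
           for comp in itertools.product(range(N + 1), repeat=n_states)
           if sum(comp) == N]
    return np.array(pts)

def nearest_grid_index(belief, grid):
    """Quantize a belief to its nearest grid point (Euclidean nearest neighbor)."""
    return int(np.argmin(np.linalg.norm(grid - belief, axis=1)))

def belief_update(b, a, o, T, O):
    """Bayes update: b'(s') proportional to O[a, s', o] * sum_s T[a, s, s'] * b(s)."""
    unnorm = O[a, :, o] * (b @ T[a])
    z = unnorm.sum()                      # z = P(o | b, a)
    return (unnorm / z, z) if z > 0 else (b, 0.0)

def finite_belief_mdp(T, O, c, grid):
    """Build transitions and expected costs of the quantized belief-space MDP."""
    n_grid, n_actions = len(grid), T.shape[0]
    P = np.zeros((n_actions, n_grid, n_grid))
    C = grid @ c                          # C[i, a] = sum_s grid[i, s] * c[s, a]
    for i, b in enumerate(grid):
        for a in range(n_actions):
            for o in range(O.shape[2]):
                b_next, p_o = belief_update(b, a, o, T, O)
                if p_o > 0:
                    P[a, i, nearest_grid_index(b_next, grid)] += p_o
    return P, C

def value_iteration(P, C, beta=0.9, tol=1e-8):
    """Standard discounted-cost value iteration on the finite model."""
    n_actions, n_grid, _ = P.shape
    V = np.zeros(n_grid)
    while True:
        Q = C.T + beta * np.einsum('aij,j->ai', P, V)   # Q[a, i]
        V_new = Q.min(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmin(axis=0)
        V = V_new

if __name__ == "__main__":
    # Toy 2-state, 2-action, 2-observation POMDP (all numbers are made up).
    T = np.array([[[0.9, 0.1], [0.2, 0.8]],      # T[a, s, s']
                  [[0.5, 0.5], [0.4, 0.6]]])
    O = np.array([[[0.8, 0.2], [0.3, 0.7]],      # O[a, s', o]
                  [[0.6, 0.4], [0.1, 0.9]]])
    c = np.array([[0.0, 1.0],                    # c[s, a]
                  [1.0, 0.0]])
    grid = uniform_simplex_grid(n_states=2, N=20)
    P, C = finite_belief_mdp(T, O, c, grid)
    V, policy = value_iteration(P, C, beta=0.9)
    print("value at uniform belief:", V[nearest_grid_index(np.array([0.5, 0.5]), grid)])
```

Refining the grid (larger N) makes the finite model's value function and policy track the belief space MDP more closely, in the spirit of the near-optimality result stated in the abstract.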
Year
2017
Venue
arXiv: Systems and Control
Field
Mathematical optimization, Markov model, Partially observable Markov decision process, Probability measure, Markov decision process, Variable-order Markov model, Markov kernel, State space, Mathematics, Stochastic control
DocType
Journal
Volume
abs/1710.07009
Citations
0
PageRank
0.34
References
0
Authors
3
Name           Order  Citations  PageRank
Naci Saldi     1      0          1.01
Serdar Yüksel  2      457        53.31
Tamás Linder   3      617        68.20