Title
Quantized Stationary Control Policies in Markov Decision Processes.
Abstract
For a large class of Markov Decision Processes, stationary (possibly randomized) policies are globally optimal. However, in Borel state and action spaces, the computation and implementation of even such stationary policies are known to be prohibitive. In addition, networked control applications require remote controllers to transmit action commands to an actuator at a low information rate. These two problems motivate the study of approximating optimal policies by quantized (discretized) policies. To this end, we introduce deterministic stationary quantizer policies and show that such policies can approximate optimal deterministic stationary policies with arbitrary precision under mild technical conditions, thus demonstrating that one can search for $\varepsilon$-optimal policies within the class of quantized control policies. We also derive explicit bounds on the approximation error in terms of the rate of the approximating quantizers. We extend all of these approximation results to randomized policies. These findings pave the way for applications in the optimal design of networked control systems, where controller actions need to be quantized, as well as for new computational methods for generating approximately optimal decision policies in general (Polish) state and action spaces, under both discounted and average cost criteria.
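The abstract's central idea, that restricting the search to quantized action levels loses little as the quantizer rate grows, can be illustrated numerically. The sketch below is not from the paper: it uses a hypothetical toy MDP (ten states embedded in $[0,1]$, made-up deterministic dynamics and quadratic stage cost) and compares the discounted value obtained with $2^R$ uniformly quantized actions against a very fine action grid standing in for the continuous-action optimum.

```python
# Illustrative sketch (not code from the paper): value iteration for a toy
# discounted MDP with continuous action space [0, 1], restricted to a
# quantized action grid with 2**rate points. The MDP model below (states,
# transition rule, stage cost) is entirely hypothetical.
import numpy as np

def quantized_value(rate, n_states=10, beta=0.9, iters=200):
    """Discounted value function using 2**rate uniformly quantized actions."""
    xs = np.linspace(0.0, 1.0, n_states)        # states embedded in [0, 1]
    actions = np.linspace(0.0, 1.0, 2 ** rate)  # quantizer output levels
    X, A = xs[:, None], actions[None, :]
    # toy deterministic dynamics: next state is the grid point nearest (x + a)/2
    nxt = np.clip(np.round((X + A) / 2 * (n_states - 1)).astype(int),
                  0, n_states - 1)
    cost = (X - A) ** 2 + 0.1 * A ** 2          # stage cost c(x, a)
    V = np.zeros(n_states)
    for _ in range(iters):                      # value iteration
        V = (cost + beta * V[nxt]).min(axis=1)
    return V

# A fine grid (rate 10) stands in for the optimal continuous-action value;
# the sup-norm gap achieved by quantized policies shrinks as the rate R grows,
# mirroring the paper's rate-dependent approximation-error bounds.
V_opt = quantized_value(10)
for R in (1, 2, 4, 6):
    print(R, np.max(np.abs(quantized_value(R) - V_opt)))
```

Here the quantizer is a uniform one purely for simplicity; the paper's results concern general quantized policies, and the explicit model above is only a stand-in to make the rate/error trade-off concrete.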
Year
2013
Venue
CoRR
Field
Discrete mathematics, Mathematical optimization, Control theory, Optimal decision, Markov decision process, Average cost, Optimal design, Control system, Quantization (signal processing), Mathematics, Approximation error
DocType
Journal
Volume
abs/1310.5770
Citations
2
PageRank
0.43
References
6
Authors
3
Name            Order  Citations  PageRank
Naci Saldi      1      29         10.27
Tamás Linder    2      617        68.20
Serdar Yüksel   3      457        53.31