Title
Limits of Multi-Discounted Markov Decision Processes
Abstract
Markov decision processes (MDPs) are controllable discrete event systems with stochastic transitions. The payoff received by the controller can be evaluated in different ways, depending on the payoff function the MDP is equipped with. For example, a mean-payoff function evaluates average performance, whereas a discounted payoff function gives more weight to earlier performance by means of a discount factor. Another well-known example is the parity payoff function, which is used to encode logical specifications [14]. Surprisingly, parity and mean-payoff MDPs share two non-trivial properties: they both have pure stationary optimal strategies [4, 15] and they are both approximable by discounted MDPs with multiple discount factors (multi-discounted MDPs) [5, 15]. In this paper we unify and generalize these results. We introduce a new class of payoff functions, called priority weighted payoff functions, which generalize both the parity and mean-payoff functions. We prove that priority weighted MDPs admit optimal strategies that are pure and stationary, and that the priority weighted value of an MDP is the limit of the multi-discounted value when the discount factors tend to 0 simultaneously at various speeds.
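For reference, the payoff functions named in the abstract are usually defined as follows. These are standard single-discount textbook conventions rather than definitions taken from this record; the paper's multi-discounted variant, which attaches a separate discount factor to each priority, is not reproduced here.

\[ \phi_{\mathrm{mean}}(r_0 r_1 r_2 \cdots) \;=\; \liminf_{n \to \infty} \frac{1}{n} \sum_{t=0}^{n-1} r_t \qquad \text{(mean-payoff)} \]
\[ \phi_{\lambda}(r_0 r_1 r_2 \cdots) \;=\; \sum_{t \ge 0} \lambda\,(1-\lambda)^{t}\, r_t, \qquad \lambda \in (0,1] \qquad \text{(discounted payoff)} \]

With this normalization a small discount factor \(\lambda\) evaluates the run patiently, which is consistent with the abstract's convention that the multi-discounted value is studied as the discount factors tend to 0. The parity payoff, in one common convention, assigns payoff 1 to a run if the least priority visited infinitely often is even, and 0 otherwise.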
Year
2007
DOI
10.1109/LICS.2007.28
Venue
Wroclaw
Keywords
Markov processes, discrete event systems, multidiscounted Markov decision processes, payoff function, stationary optimal strategies, stochastic transitions
Field
Convergence (routing), Discrete mathematics, Mathematical optimization, Markov process, Optimal control, Discounting, Computer science, Markov decision process, Stochastic process, Parity (mathematics), Stochastic game
DocType
Conference
ISSN
1043-6871
ISBN
0-7695-2908-9
Citations
6
PageRank
0.66
References
6
Authors
2
Name                Order  Citations  PageRank
Hugo Gimbert        1      249        21.31
Wieslaw Zielonka    2      614        38.95