Title
Achieving Target State-Action Frequencies in Multichain Average-Reward Markov Decision Processes
Abstract
In this paper we address a basic problem that arises naturally in average-reward Markov decision processes with constraints and/or nonstandard payoff criteria: given a feasible state-action frequency vector (“the target”), construct a policy whose state-action frequencies match those of the target vector. While it is well known that a solution to this problem cannot, in general, be found in the space of stationary randomized policies, we construct a solution with an “ultimately stationary” structure: it consists of two stationary policies, where the first is used initially and a switch to the second is made at a certain random switching time. The computational effort required to construct this solution is minimal. We also show that our problem can always be solved by a stationary policy if the original MDP is “extended” by adding certain states and actions. The solution in the original MDP is obtained by mapping the solution in the extended MDP back to the original process.
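The state-action frequency vector that the abstract refers to can be illustrated concretely: for a stationary randomized policy, the long-run frequency of the pair (s, a) is the stationary probability of state s times the probability the policy assigns to action a in s. The sketch below computes these frequencies for a small hypothetical 2-state, 2-action MDP (the transition probabilities and policy are invented for illustration and are not from the paper); it does not implement the paper's two-policy switching construction.

```python
# Hypothetical 2-state, 2-action MDP, for illustration only.
# P[s][a][s2] = probability of moving to state s2 after taking action a in state s.
P = [
    [[0.9, 0.1], [0.2, 0.8]],   # transitions from state 0 under actions 0 and 1
    [[0.5, 0.5], [0.1, 0.9]],   # transitions from state 1 under actions 0 and 1
]
# pi[s][a] = probability of taking action a in state s (stationary randomized policy).
pi = [[0.5, 0.5], [0.3, 0.7]]

def state_action_frequencies(P, pi, iters=10_000):
    """Long-run state-action frequencies x[s][a] = mu[s] * pi[s][a], where mu is
    the stationary distribution of the Markov chain induced by pi (assumes the
    induced chain is unichain and aperiodic, so power iteration converges)."""
    n = len(P)
    # Chain induced by pi: P_pi[s][s2] = sum_a pi[s][a] * P[s][a][s2]
    P_pi = [[sum(pi[s][a] * P[s][a][s2] for a in range(len(pi[s])))
             for s2 in range(n)] for s in range(n)]
    mu = [1.0 / n] * n
    for _ in range(iters):                      # power iteration on the distribution
        mu = [sum(mu[s] * P_pi[s][s2] for s in range(n)) for s2 in range(n)]
    return [[mu[s] * pi[s][a] for a in range(len(pi[s]))] for s in range(n)]

x = state_action_frequencies(P, pi)
```

A vector x of this form is "feasible" exactly when it is nonnegative, sums to one, and satisfies the flow-balance constraints (the total frequency of entering each state equals the total frequency of leaving it); the paper's problem is the converse direction, going from such a vector back to a policy.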
Year: 2002
DOI: 10.1287/moor.27.3.545.316
Venue: Math. Oper. Res.
Keywords: target state-action frequencies, Markov decision processes, nonstandard reward criteria, average reward criterion, constrained Markov decision processes, multichain average-reward MDP, extended MDP, stationary randomized policy, feasible state-action frequency vector, random switching time
DocType: Journal
Volume: 27
Issue: 2
ISSN: 0364-765X
Citations: 3
PageRank: 0.41
References: 8
Authors: 2
Name, Order, Citations, PageRank
Dmitry Krass, 1, 483, 82.08
O. J. Vrieze, 2, 49, 19.22