Title
Finite-Horizon Markov Decision Processes with State Constraints
Abstract
Markov Decision Processes (MDPs) have been used to formulate many decision-making problems in science and engineering. The objective is to synthesize the best decision (action-selection) policies to maximize expected rewards (or minimize costs) in a given stochastic dynamical environment. In many practical scenarios (multi-agent systems, telecommunications, queuing, etc.), the decision-making problem can have state constraints that must be satisfied, leading to Constrained MDP (CMDP) problems. In the presence of such state constraints, the optimal policies can be very hard to characterize. This paper introduces a new approach for finding non-stationary randomized policies for finite-horizon CMDPs. An efficient algorithm based on Linear Programming (LP) and duality theory is proposed; it characterizes the convex set of feasible policies and ensures that the expected total reward is above a computable lower bound. The resulting decision policy is a randomized policy obtained by projecting the unconstrained deterministic MDP policy onto this convex set. To the best of our knowledge, this is the first efficient algorithm for generating finite-horizon randomized policies for state-constrained MDPs with optimality guarantees. A simulation example of a swarm of autonomous agents running MDPs is also presented to demonstrate the proposed CMDP solution algorithm.
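The LP-based approach sketched in the abstract can be illustrated on a toy instance. The following Python sketch is a minimal illustration, not the paper's implementation: the 2-state/2-action transition kernel, rewards, horizon, and state-occupancy budget are all invented for demonstration. It solves a finite-horizon CMDP as a linear program over occupation measures x[t,s,a] and recovers a (generally randomized) non-stationary policy by normalizing the optimal occupation measure:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy instance (not from the paper): 2 states, 2 actions, horizon T.
S, A, T = 2, 2, 4
# Transition kernel P[s, a, s']: action 1 drifts toward state 1.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.8, 0.2], [0.1, 0.9]]])
R = np.array([[0.0, 1.0], [0.0, 2.0]])   # reward R[s, a]; action 1 is rewarding
p0 = np.array([1.0, 0.0])                # initial state distribution

idx = lambda t, s, a: (t * S + s) * A + a   # flatten (t, s, a) -> LP column
n = T * S * A

# Objective: maximize expected total reward  =>  minimize -sum R[s,a] * x[t,s,a]
c = np.zeros(n)
for t in range(T):
    for s in range(S):
        for a in range(A):
            c[idx(t, s, a)] = -R[s, a]

# Equality constraints: initial distribution and flow conservation over time.
A_eq, b_eq = [], []
for s in range(S):
    row = np.zeros(n)
    for a in range(A):
        row[idx(0, s, a)] = 1.0
    A_eq.append(row); b_eq.append(p0[s])
for t in range(T - 1):
    for s2 in range(S):
        row = np.zeros(n)
        for a in range(A):
            row[idx(t + 1, s2, a)] = 1.0
        for s in range(S):
            for a in range(A):
                row[idx(t, s, a)] -= P[s, a, s2]
        A_eq.append(row); b_eq.append(0.0)

# Illustrative state constraint: expected time spent in state 1 must not exceed 1.0.
A_ub = np.zeros((1, n))
for t in range(T):
    for a in range(A):
        A_ub[0, idx(t, 1, a)] = 1.0
b_ub = [1.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
x = res.x.reshape(T, S, A)

# Recover a randomized policy: pi[t, s, a] = x[t, s, a] / sum_a x[t, s, a]
# (uniform where a state has zero occupancy, so rows still form distributions).
occ = x.sum(axis=2, keepdims=True)
pi = np.divide(x, occ, out=np.full_like(x, 1.0 / A), where=occ > 1e-12)
print("optimal constrained expected reward:", -res.fun)
```

Because the occupancy constraint binds, the optimal occupation measure mixes the rewarding action with the safe one, which is why the recovered policy is randomized rather than deterministic.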
Year
2015
Venue
CoRR
Field
Autonomous agent, Mathematical optimization, Swarm behaviour, Duality (mathematics), Markov decision process, Convex set, Queueing theory, Linear programming, Action selection, Mathematics
DocType
Journal
Volume
abs/1507.01585
Citations
1
PageRank
0.36
References
13
Authors
2
Name | Order | Citations | PageRank
Mahmoud El Chamie | 1 | 4 | 27.18
Behcet Acikmese | 2 | 1 | 1.04