Title
Computing Monotone Policies For Markov Decision Processes: A Nearly-Isotonic Penalty Approach
Abstract
This paper discusses algorithms for solving Markov decision processes (MDPs) that have monotone optimal policies. We propose a two-stage alternating convex optimization scheme that can accelerate the search for an optimal policy by exploiting the monotone property. The first stage is a linear program formulated in terms of the joint state-action probabilities. The second stage is a regularized problem formulated in terms of the conditional probabilities of actions given states, where the regularization uses techniques from nearly-isotonic regression. While a variety of iterative methods can be applied to the first-stage formulation, our numerical simulations show that, in particular, the alternating direction method of multipliers (ADMM) can be significantly accelerated by the regularization step.
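For intuition, here is a minimal sketch (not the authors' code) of the two-stage scheme the abstract describes, written with cvxpy. Assumptions beyond the abstract: a small, randomly generated average-cost MDP with transition tensor P[a, s, s'] and cost matrix c[s, a]; monotonicity is encouraged via a nearly-isotonic penalty on the mean action of the conditional policy, a simplified stand-in for the paper's actual regularizer; and a generic convex solver is invoked rather than the ADMM iteration the paper studies.

```python
# Sketch of a two-stage scheme for MDPs with (near-)monotone optimal policies.
# Stage 1: LP over joint state-action probabilities pi(s, a).
# Stage 2: nearly-isotonic regularization of the conditional policy theta(a|s).
import numpy as np
import cvxpy as cp

S, A = 5, 3                                       # states, actions (toy sizes)
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(S), size=(A, S))        # P[a, s, :] = next-state probs
c = rng.random((S, A))                            # per-stage costs c[s, a]

# Stage 1: linear program in the occupation measure pi(s, a) >= 0.
pi = cp.Variable((S, A), nonneg=True)
flow = [cp.sum(pi[s2, :]) == cp.sum(cp.multiply(pi, P[:, :, s2].T))
        for s2 in range(S)]                       # stationarity of the measure
lp = cp.Problem(cp.Minimize(cp.sum(cp.multiply(c, pi))),
                flow + [cp.sum(pi) == 1])
lp.solve()

# Recover the conditional policy theta(a|s) = pi(s, a) / sum_a pi(s, a).
theta0 = pi.value / pi.value.sum(axis=1, keepdims=True)

# Stage 2: re-fit theta near theta0 with a nearly-isotonic penalty that
# charges only for *violations* of monotonicity across states.
lam = 1.0
theta = cp.Variable((S, A), nonneg=True)
mean_action = theta @ np.arange(A)                # summary statistic g(s)
near_iso = cp.sum(cp.pos(mean_action[:-1] - mean_action[1:]))
reg = cp.Problem(cp.Minimize(cp.sum_squares(theta - theta0) + lam * near_iso),
                 [cp.sum(theta, axis=1) == 1])
reg.solve()
print(np.round(theta.value, 3))
```

Note the design choice inherited from nearly-isotonic regression: the penalty term is zero whenever the statistic is already nondecreasing in the state, so monotone policies are unpenalized and only violations of the structure are charged.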
Year
2017
DOI
10.1016/j.ifacol.2017.08.1575
Venue
IFAC-PapersOnLine
Keywords
stochastic control, Markov decision process (MDP), ℓ1-regularization, sparsity, monotone policy, alternating direction method of multipliers (ADMM), isotonic regression
Field
Mathematical optimization, Iterative method, Markov model, Partially observable Markov decision process, Computer science, Markov decision process, Theoretical computer science, Regularization (mathematics), Linear programming, Convex optimization, Monotone polygon
DocType
Journal
Volume
50
Issue
1
ISSN
2405-8963
Citations
0
PageRank
0.34
References
3
Authors
4
Name                  Order  Citations  PageRank
Robert Mattila        1      4          3.51
Cristian R. Rojas     2      252        43.97
Vikram Krishnamurthy  3      925        162.74
Bo Wahlberg           4      210        40.68