Abstract
---
A standard objective in partially observable Markov decision processes (POMDPs) is to find a policy that maximizes the expected discounted-sum payoff. However, such policies may still permit unlikely but highly undesirable outcomes, which is especially problematic in safety-critical applications. Recently, there has been a surge of interest in POMDPs where the goal is to maximize the probability that the payoff is at least a given threshold, but these approaches do not consider any optimization beyond satisfying this constraint. In this work we go beyond both the expectation and threshold approaches and consider a guaranteed payoff optimization (GPO) problem for POMDPs, where we are given a threshold $t$ and the objective is to find a policy $\sigma$ such that a) each possible outcome of $\sigma$ yields a discounted-sum payoff of at least $t$, and b) the expected discounted-sum payoff of $\sigma$ is optimal (or near-optimal) among all policies satisfying a). We present a practical approach to tackle the GPO problem and evaluate it on standard POMDP benchmarks.
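To make conditions a) and b) concrete, the GPO problem can be sketched as the following constrained optimization; the notation here ($\gamma \in (0,1)$ for the discount factor, $r_i$ for the reward at step $i$, $\rho$ for an outcome, i.e., a play consistent with $\sigma$) is assumed for illustration and is not taken from the paper itself:

$$
\sup_{\sigma}\; \mathbb{E}^{\sigma}\!\Big[\sum_{i=0}^{\infty} \gamma^{i} r_{i}\Big]
\quad\text{subject to}\quad
\sum_{i=0}^{\infty} \gamma^{i} r_{i}(\rho) \;\ge\; t \;\;\text{for every outcome } \rho \text{ of } \sigma.
$$

In words: among all policies whose worst-case discounted payoff meets the threshold $t$, choose one whose expected discounted payoff is (near-)maximal.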
Year | Venue | Field
---|---|---
2016 | arXiv: Artificial Intelligence | Mathematical optimization, Computer science, Partially observable Markov decision process, Markov decision process, Technical report, Stochastic game

DocType | Volume | Citations
---|---|---
Journal | abs/1611.08696 | 0

PageRank | References | Authors
---|---|---
0.34 | 0 | 5

Name | Order | Citations | PageRank |
---|---|---|---|
Krishnendu Chatterjee | 1 | 2179 | 162.09 |
Petr Novotný | 2 | 46 | 3.35 |
Guillermo A. Pérez | 3 | 10 | 9.32 |
Jean-François Raskin | 4 | 1735 | 100.15 |
Đorđe Žikelić | 5 | 6 | 2.44