Title
A geometric approach to find nondominated policies to imprecise reward MDPs
Abstract
Markov Decision Processes (MDPs) provide a mathematical framework for modelling the decision-making of agents acting in stochastic environments, in which transition probabilities model the environment dynamics and a reward function evaluates the agent's behaviour. Recently, however, special attention has been drawn to the difficulty of specifying the reward function precisely, which has motivated research on MDPs with imprecisely specified rewards. Some of these works exploit nondominated policies, i.e., policies that are optimal for some instantiation of the imprecise reward function. The πWitness algorithm calculates nondominated policies, which are then used to make decisions under the minimax regret criterion. It would be useful to identify a small subset of nondominated policies so that the minimax regret can be computed faster yet still accurately; we modified πWitness to do so. We also present the πHull algorithm, which calculates nondominated policies by adopting a geometric approach. Under the assumption that reward functions are linear in a set of features, we show empirically that πHull can be faster than our modified version of πWitness.
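The sketch below illustrates the geometric idea behind the abstract, not the paper's πHull implementation. Under the stated linear-reward assumption, r(s, a) = w · φ(s, a), each policy π is summarized by its expected discounted feature vector μ_π, so its value is V_π(w) = w · μ_π. A policy can only be optimal for some reward weight w if μ_π lies on the convex hull of the set of feature vectors, so hull vertices give a candidate set of nondominated policies, over which a sampled max regret can be evaluated. All names below (feature_expectations, max_regret, the weight sample W) are illustrative assumptions, not the paper's API.

```python
import numpy as np
from scipy.spatial import ConvexHull

def nondominated_candidates(feature_expectations: np.ndarray) -> np.ndarray:
    """Indices of policies whose feature vectors are convex-hull vertices.

    feature_expectations: (n_policies, n_features) array, row i = mu_{pi_i}.
    Only hull vertices can maximize w . mu for some weight vector w, so the
    rest are dominated for every linear reward instantiation. (If the weight
    set is a restricted polytope, this is an over-approximation.)
    """
    hull = ConvexHull(feature_expectations)
    return np.sort(hull.vertices)

def max_regret(mu_pi: np.ndarray, candidates: np.ndarray, W: np.ndarray) -> float:
    """Worst-case regret of one policy over a finite sample W of reward weights.

    candidates: (k, n_features) feature vectors of the nondominated candidates.
    W: (n_weights, n_features); each row is one instantiation w of the
    imprecise reward. Regret of pi under w is max_pi' w.mu_pi' - w.mu_pi.
    """
    values = W @ candidates.T          # value of every candidate under every w
    best = values.max(axis=1)          # adversary's best response per weight
    return float(np.max(best - W @ mu_pi))

# Toy usage: pick the minimax-regret policy among the hull candidates.
mus = np.random.rand(50, 3)            # hypothetical feature expectations
idx = nondominated_candidates(mus)
W = np.random.rand(200, 3)             # sampled reward instantiations
regrets = [max_regret(mus[i], mus[idx], W) for i in idx]
best_policy = idx[int(np.argmin(regrets))]
```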
Year
2011
DOI
10.1007/978-3-642-23780-5_38
Venue
ECML/PKDD (1)
Keywords
markov decision processes, reward mdps, imprecisely specified reward, minimax regret evaluation, geometric approach, hull algorithm, imprecise reward function, nondominated policy, modelling decision-making, minimax regret, reward function, modified version
Field
Preference elicitation, Mathematical optimization, Regret, Markov decision process, Witness, Exploit, Artificial intelligence, Reward-based selection, Hull, Mathematics
DocType
Conference
Volume
6911
ISSN
0302-9743
Citations
0
PageRank
0.34
References
11
Authors
2
Name | Order | Citations | PageRank
Valdinei Freire da Silva | 1 | 25 | 6.86
Anna Helena Reali Costa | 2 | 1923 | 1.97