Title: Gradient-Based Inverse Risk-Sensitive Reinforcement Learning
Abstract: We address the problem of inverse reinforcement learning in Markov decision processes where the agent is risk-sensitive. In particular, we model risk-sensitivity in a reinforcement learning framework using models of human decision-making that originate in behavioral psychology and economics. We propose a gradient-based inverse reinforcement learning algorithm that minimizes a loss function defined on the observed behavior. We demonstrate the performance of the proposed technique on two examples: the canonical Grid World example, and an MDP modeling passengers' decisions regarding ride-sharing. In the latter, we use pricing and travel-time data from a ride-sharing company to construct the transition probabilities and rewards of the MDP.
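The abstract describes fitting a risk-sensitive decision model to observed behavior by gradient descent on a loss function. The following is a minimal sketch of that idea, not the paper's algorithm: it assumes a prospect-theory-style value function and a myopic softmax policy on a toy two-state problem, and recovers a loss-aversion parameter `lam` by descending the negative log-likelihood of simulated demonstrations. All names, parameter values, and the model form are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only -- not the paper's exact model or algorithm.
# A myopic risk-sensitive agent chooses between two actions per state;
# the prospect-theory-style value function and the loss-aversion
# parameter `lam` are assumptions made for this example.

rewards = np.array([[1.0, -1.0],
                    [0.5, -2.0]])  # rewards[state, action]

def pt_value(r, lam, alpha=0.88):
    """Prospect-theory-style value: concave in magnitude, losses scaled by lam."""
    mag = np.abs(r) ** alpha
    return np.where(r >= 0, mag, -lam * mag)

def policy(lam):
    """Softmax policy over risk-transformed one-step rewards."""
    q = pt_value(rewards, lam)
    z = np.exp(q - q.max(axis=1, keepdims=True))  # stabilized softmax
    return z / z.sum(axis=1, keepdims=True)

def nll(lam, demos):
    """Negative log-likelihood of observed (state, action) pairs."""
    p = policy(lam)
    return -sum(np.log(p[s, a]) for s, a in demos)

# Simulate demonstrations from a "true" loss-averse agent (lam = 2.25).
rng = np.random.default_rng(0)
true_pi = policy(2.25)
demos = [(s, rng.choice(2, p=true_pi[s])) for s in rng.integers(0, 2, 500)]

# Gradient descent on the scalar parameter via a central finite difference.
lam, lr, eps = 1.0, 0.5, 1e-4
for _ in range(300):
    g = (nll(lam + eps, demos) - nll(lam - eps, demos)) / (2 * eps)
    lam -= lr * g / len(demos)

print(f"recovered loss-aversion parameter: {lam:.2f}")
```

In the paper's setting the policy would come from a risk-sensitive dynamic program over the full MDP rather than one-step rewards, but the fitting loop (differentiate a loss on observed behavior through the policy) is the same idea.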
Year: 2017
Venue: 2017 IEEE 56th Annual Conference on Decision and Control (CDC)
Field: Inverse, Mathematical optimization, Markov process, Computer science, Markov decision process, Inverse reinforcement learning, Travel time, Cost accounting, Grid, Reinforcement learning
DocType: Conference
ISSN: 0743-1546
Citations: 1
PageRank: 0.36
References: 0
Authors: 4
Authors (Name, Order, Citations, PageRank):
Eric Mazumdar, 1, 13, 7.50
Lillian J. Ratliff, 2, 87, 23.32
Tanner Fiez, 3, 4, 4.37
Shankar Sastry, 4, 11977, 1291.58