Abstract |
---|
Inverse reinforcement learning (IRL) attempts to infer human rewards or preferences from observed behavior. Since human planning systematically deviates from rationality, several approaches have been tried to account for specific human shortcomings. However, the general problem of inferring the reward function of an agent of unknown rationality has received little attention. Unlike the well-known ambiguity problems in IRL, this one is practically relevant but cannot be resolved by observing the agent's policy in enough environments. This paper shows (1) that a No Free Lunch result implies it is impossible to uniquely decompose a policy into a planning algorithm and reward function, and (2) that even with a reasonable simplicity prior/Occam's razor on the set of decompositions, we cannot distinguish between the true decomposition and others that lead to high regret. To address this, we need simple 'normative' assumptions, which cannot be deduced exclusively from observations. |
Year | Venue | Keywords |
---|---|---|
2018 | NeurIPS | reward function, inverse reinforcement learning, no free lunch, planning algorithm, Occam's razor
Field | DocType | Citations
---|---|---|
Mathematical optimization, Mathematical economics, Rationality, Regret, Computer science, No free lunch in search and optimization, Inverse reinforcement learning, Irrational number, Occam's razor, Ambiguity | Conference | 1
PageRank | References | Authors
---|---|---|
0.36 | 22 | 2
Name | Order | Citations | PageRank |
---|---|---|---|
Stuart Armstrong | 1 | 27 | 4.54 |
Sören Mindermann | 2 | 1 | 2.38 |