Title
Inverse POMDP: Inferring What You Think from What You Do.
Abstract
Complex behaviors are often driven by an internal model, which integrates sensory information over time and facilitates long-term planning. Inferring the internal model is a crucial ingredient for interpreting neural activity of agents and is beneficial for imitation learning. Here we describe a method to infer an agent's internal model and dynamic beliefs, and apply it to a simulated agent performing a foraging task. We assume the agent behaves rationally according to its understanding of the task and of the relevant causal variables that cannot be fully observed. We model this rational solution as a Partially Observable Markov Decision Process (POMDP). However, we allow that the agent may have wrong assumptions about the task, and our method learns these assumptions from the agent's actions. Given the agent's sensory observations and actions, we learn its internal model by maximum likelihood estimation over a set of task-relevant parameters. The Markov property of the POMDP enables us to characterize the transition probabilities between internal states and iteratively estimate the agent's policy using a constrained Expectation-Maximization (EM) algorithm. We validate our method on simulated agents performing suboptimally on a foraging task, and successfully recover the agents' actual models.
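The core idea of the abstract (infer the parameters of an agent's assumed dynamics by maximizing the likelihood of its observed actions, exploiting the Markov property) can be illustrated with a minimal sketch. Everything below is a hypothetical toy setup, not the paper's actual model: two latent states, two actions, and a single parameter `theta` (the agent's assumed probability of staying in its current state), recovered by a grid search over the forward-algorithm likelihood.

```python
import numpy as np

def transition(theta):
    # Agent's assumed latent-state dynamics: stay with probability theta.
    return np.array([[theta, 1 - theta],
                     [1 - theta, theta]])

# P(action | latent state); fixed here purely for illustration.
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])

def log_likelihood(actions, theta):
    # Standard normalized forward recursion over the latent state,
    # accumulating the log-likelihood of the action sequence.
    T = transition(theta)
    alpha = np.array([0.5, 0.5]) * B[:, actions[0]]
    ll = np.log(alpha.sum())
    alpha /= alpha.sum()
    for a in actions[1:]:
        alpha = (T.T @ alpha) * B[:, a]
        ll += np.log(alpha.sum())
        alpha /= alpha.sum()
    return ll

# Simulate an agent whose latent state really does follow transition(0.85),
# then recover that parameter from its actions alone.
rng = np.random.default_rng(0)
true_theta, s, actions = 0.85, 0, []
for _ in range(500):
    actions.append(int(rng.random() >= B[s, 0]))       # sample action from B[s, :]
    s = s if rng.random() < true_theta else 1 - s      # latent state evolves

thetas = np.linspace(0.5, 0.99, 50)
best = thetas[np.argmax([log_likelihood(actions, t) for t in thetas])]
```

The paper's constrained EM would replace the grid search with alternating expectation (belief inference) and maximization (parameter update) steps, but the likelihood being maximized is the same kind of object.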
Year: 2018
Venue: arXiv: Learning
Field: Inverse, Markov property, Partially observable Markov decision process, Maximum likelihood, Artificial intelligence, Imitation learning, Machine learning, Foraging, Internal model, Mathematics
DocType:
Volume: abs/1805.09864
Citations: 0
Journal:
PageRank: 0.34
References: 0
Authors: 3
Name              Order  Citations  PageRank
Zhengwei Wu       1      4          16.60
Paul R. Schrater  2      141        22.71
Xaq Pitkow        3      9          6.84