Title: Invariant Causal Prediction for Block MDPs
Abstract: Generalization across environments is critical to the successful application of reinforcement learning (RL) algorithms to real-world challenges. In this work we propose a method for learning state abstractions which generalize to novel observation distributions in the multi-environment RL setting. We prove that for certain classes of environments, this approach outputs, with high probability, a state abstraction corresponding to the causal feature set with respect to the return. We give empirical evidence that analogous methods for the nonlinear setting can also attain improved generalization over single- and multi-task baselines. Lastly, we provide bounds on model generalization error in the multi-environment setting, in the process showing a connection between causal variable identification and the state abstraction framework for MDPs.
Year: 2020
Venue: ICML
DocType: Conference
Citations: 0
PageRank: 0.34
References: 0
Authors (8):

Name             Order  Citations  PageRank
Amy Zhang        1      45         8.13
Clare Lyle       2      0          0.34
Shagun Sodhani   3      0          0.34
Angelos Filos    4      0          1.69
M. Kwiatkowska   5      100        6.63
Joelle Pineau    6      2857       184.18
Yarin Gal        7      665        37.30
Doina Precup     8      2829       221.83