Title
Counterfactual Credit Assignment in Model-Free Reinforcement Learning
Abstract
Credit assignment in reinforcement learning is the problem of measuring an action's influence on future rewards. In particular, this requires separating skill from luck, i.e. disentangling the effect of an action on rewards from that of external factors and subsequent actions. To achieve this, we adapt the notion of counterfactuals from causality theory to a model-free RL setup. The key idea is to condition value functions on future events, by learning to extract relevant information from a trajectory. We formulate a family of policy gradient algorithms that use these future-conditional value functions as baselines or critics, and show that they are provably low variance. To avoid the potential bias from conditioning on future information, we constrain the hindsight information to not contain information about the agent's actions. We demonstrate the efficacy and validity of our algorithm on a number of illustrative and challenging problems.
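The abstract describes policy-gradient estimators whose baseline is conditioned on learned hindsight features of the future trajectory, with a constraint that those features carry no information about the agent's own actions. The sketch below illustrates that recipe in PyTorch under toy assumptions; the dimensions and module names (hindsight_enc, probe, cca_style_loss) are hypothetical, and the single combined objective is a simplification for illustration, not the authors' implementation.

import torch
import torch.nn as nn

# Toy dimensions; purely illustrative.
STATE_DIM, ACT_DIM, HINDSIGHT_DIM = 8, 4, 16

policy = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.Tanh(), nn.Linear(64, ACT_DIM))
# Summarises the future of the trajectory into hindsight features phi_t.
hindsight_enc = nn.GRU(STATE_DIM, HINDSIGHT_DIM, batch_first=True)
# Future-conditional baseline b(s_t, phi_t).
baseline = nn.Sequential(nn.Linear(STATE_DIM + HINDSIGHT_DIM, 64), nn.Tanh(), nn.Linear(64, 1))
# Probe q(a_t | s_t, phi_t), used to penalise action information leaking into phi_t.
probe = nn.Sequential(nn.Linear(STATE_DIM + HINDSIGHT_DIM, 64), nn.Tanh(), nn.Linear(64, ACT_DIM))

def cca_style_loss(states, actions, returns):
    # states: [T, STATE_DIM] float, actions: [T] long, returns: [T] float.
    # Run the encoder over the reversed trajectory so phi[t] depends on s_t..s_T.
    rev_out, _ = hindsight_enc(states.flip(0).unsqueeze(0))
    phi = rev_out.squeeze(0).flip(0)

    logits = policy(states)
    logp = torch.log_softmax(logits, -1).gather(1, actions[:, None]).squeeze(1)

    b = baseline(torch.cat([states, phi], -1)).squeeze(1)
    advantage = returns - b.detach()              # baseline conditioned on future events
    pg_loss = -(advantage * logp).mean()
    baseline_loss = (returns - b).pow(2).mean()

    # Independence surrogate: push q(.|s, phi) toward pi(.|s) so that phi adds
    # no information about the chosen action (a single joint penalty here,
    # standing in for the paper's full constrained training scheme).
    q_logits = probe(torch.cat([states, phi], -1))
    kl = torch.distributions.kl_divergence(
        torch.distributions.Categorical(logits=logits.detach()),
        torch.distributions.Categorical(logits=q_logits),
    ).mean()
    return pg_loss + baseline_loss + kl

# Toy usage on random data.
T = 5
loss = cca_style_loss(torch.randn(T, STATE_DIM), torch.randint(ACT_DIM, (T,)), torch.randn(T))
loss.backward()

Because the advantage subtracts a baseline that already "knows" how the future unfolded, returns explained by external factors cancel out; the independence penalty is what keeps this conditioning from biasing the gradient.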
Year
2021
Venue
INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139
DocType
Conference
Volume
139
ISSN
2640-3498
Citations
0
PageRank
0.34
References
0
Authors
13
Name               Order  Citations  PageRank
Thomas Mesnard     1      0          0.68
Theophane Weber    2      159        16.79
Fabio Viola        3      202        8.87
Shantanu Thakoor   4      0          0.34
Alaa Saade         5      0          0.34
Anna Harutyunyan   6      85         9.63
William Dabney     7      270        17.86
Tom Stepleton      8      2          1.07
Nicolas Heess      9      1762       94.77
Arthur Guez        10     2481       100.43
Marcus Hutter      11     1302       132.09
Lars Buesing       12     248        16.50
Rémi Munos         13     2240       157.06