Title
Counterfactual Normalization: Proactively Addressing Dataset Shift Using Causal Mechanisms
Abstract
Predictive models can fail to generalize from training to deployment environments because of dataset shift, posing a threat to model reliability in practice. Unlike previous methods, which use samples from the target distribution to reactively correct for dataset shift, we propose using graphical knowledge of the causal mechanisms relating variables in a prediction problem to proactively remove variables that participate in spurious associations with the prediction target, allowing models to generalize across datasets. To accomplish this, we augment the causal graph with latent counterfactual variables that account for the underlying causal mechanisms, and show how these variables can be estimated. In our experiments we demonstrate that models using good estimates of the latent variables in place of the observed variables transfer better from training to target domains, with minimal accuracy loss in the training domain.
Year
2018
Venue
UNCERTAINTY IN ARTIFICIAL INTELLIGENCE
Field
Normalization (statistics), Computer science, Counterfactual thinking, Artificial intelligence, Machine learning
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
2
Name | Order | Citations | PageRank
Adarsh Subbaswamy | 1 | 1 | 2.38
Suchi Saria | 2 | 1 | 1.36