Abstract
---
Estimating heterogeneous treatment effects in domains such as healthcare or social science often involves sensitive data where protecting privacy is important. We introduce a general meta-algorithm for estimating conditional average treatment effects (CATE) with differential privacy (DP) guarantees. Our meta-algorithm can work with simple, single-stage CATE estimators such as the S-learner and more complex multi-stage estimators such as the DR- and R-learners. We perform a tight privacy analysis by taking advantage of sample splitting in our meta-algorithm and the parallel composition property of differential privacy. In this paper, we implement our approach using DP-EBMs as the base learner. DP-EBMs are interpretable, high-accuracy models with privacy guarantees, which allow us to directly observe the impact of DP noise on the learned causal model. Our experiments show that multi-stage CATE estimators incur larger accuracy loss than single-stage CATE or ATE estimators, and that most of the accuracy loss from differential privacy is due to an increase in variance, not biased estimates of treatment effects.

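To make the abstract's terminology concrete, here is a minimal sketch of the single-stage S-learner pattern it mentions: one outcome model μ(x, t) is fit on the pooled data, and the CATE is estimated as μ(x, 1) − μ(x, 0). This is an illustration only, not the paper's method: the synthetic data, the feature layout, and the plain least-squares fit (standing in for the DP-EBM base learner, with no DP noise added) are all assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data (illustrative, not from the paper):
# true CATE is 2*x; outcome y = x + t*2*x + noise.
n = 2000
x = rng.uniform(-1, 1, size=n)
t = rng.integers(0, 2, size=n)
y = x + t * 2 * x + rng.normal(scale=0.1, size=n)

# S-learner: fit a single outcome model mu(x, t) on the pooled data.
# Here a least-squares fit on features [1, x, t, x*t] stands in for the
# base learner; the paper uses DP-EBMs instead.
X = np.column_stack([np.ones(n), x, t, x * t])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

def cate(x_new):
    """Estimated CATE: mu(x, t=1) - mu(x, t=0)."""
    x_new = np.asarray(x_new, dtype=float)
    one = np.ones_like(x_new)
    mu1 = np.column_stack([one, x_new, one, x_new]) @ beta
    mu0 = np.column_stack([one, x_new, 0 * one, 0 * one]) @ beta
    return mu1 - mu0

print(cate([0.5]))  # close to the true effect 2 * 0.5 = 1.0
```

Multi-stage estimators such as the DR- and R-learners add nuisance models (propensity and outcome regressions) fit on disjoint sample splits, which is what enables the parallel-composition privacy accounting described in the abstract.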
Year | Venue | DocType
---|---|---
2022 | Conference on Causal Learning and Reasoning (CLeaR) | Conference

Citations | PageRank | References
---|---|---
0 | 0.34 | 0

Authors
---
6

Name | Order | Citations | PageRank
---|---|---|---
Fengshi Niu | 1 | 0 | 0.34 |
Harsha Nori | 2 | 4 | 2.79 |
Brian Quistorff | 3 | 0 | 0.68 |
Rich Caruana | 4 | 4503 | 655.71 |
Donald Ngwe | 5 | 0 | 0.34 |
Aadharsh Kannan | 6 | 0 | 0.34 |