Title
Accuracy, Interpretability, and Differential Privacy via Explainable Boosting
Abstract
We show that adding differential privacy to Explainable Boosting Machines (EBMs), a recent method for training interpretable ML models, yields state-of-the-art accuracy while protecting privacy. Our experiments on multiple classification and regression datasets show that DP-EBM models suffer surprisingly little accuracy loss even with strong differential privacy guarantees. In addition to high accuracy, two other benefits of applying DP to EBMs are: a) trained models provide exact global and local interpretability, which is often important in settings where differential privacy is needed; and b) the models can be edited after training without loss of privacy to correct errors which DP noise may have introduced.
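As a usage illustration, the sketch below trains a differentially private EBM with the open-source InterpretML package, which provides DP-EBM estimators. The import path `interpret.privacy.DPExplainableBoostingClassifier`, its `epsilon`/`delta` arguments, and the synthetic data are assumptions about the library's interface for illustration only; they are not details taken from this record.

```python
# Minimal sketch; assumes the InterpretML package exposes DP-EBM estimators
# under interpret.privacy with a scikit-learn-style fit/predict interface
# and an (epsilon, delta) privacy budget.
import numpy as np
from sklearn.model_selection import train_test_split
from interpret.privacy import DPExplainableBoostingClassifier  # assumed import path

# Synthetic binary-classification data stands in for a sensitive dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 8))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train an EBM under an (epsilon, delta) differential-privacy guarantee.
dp_ebm = DPExplainableBoostingClassifier(epsilon=1.0, delta=1e-5)  # assumed parameter names
dp_ebm.fit(X_train, y_train)

print("test accuracy:", dp_ebm.score(X_test, y_test))

# Like a non-private EBM, the trained model is an additive sum of per-feature
# shape functions, so its global explanation is exact rather than post hoc.
explanation = dp_ebm.explain_global()
```

Because the learned shape functions are additive and directly inspectable, they can also be edited after training, which is how the abstract describes correcting errors that DP noise may have introduced, without loss of privacy.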
Year 2021
Venue INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139
DocType Conference
Volume 139
ISSN 2640-3498
Citations 0
PageRank 0.34
References 0
Authors 5
Name                Order  Citations  PageRank
Harsha Nori         1      0          0.34
Rich Caruana        2      45036      55.71
Bu, Zhiqi           3      1          1.37
Judy Hanwen Shen    4      5          1.45
Janardhan Kulkarni  5      28         3.34