Title
Machine learning explainability via microaggregation and shallow decision trees
Abstract
Artificial intelligence (AI) is being deployed in missions that are increasingly critical for human life. To build trust in AI and avoid an algorithm-based authoritarian society, automated decisions should be explainable. This is not only a right of citizens, enshrined for example in the European General Data Protection Regulation, but also a desirable goal for engineers, who want to know whether a decision algorithm is capturing the relevant features. For explainability to be scalable, it should be possible to derive explanations in a systematic way. A common approach is to use a simpler, more intuitive decision algorithm to build a surrogate model of the black-box model (for example, a deep learning model) used to make a decision. Yet there is a risk that the surrogate model is too large to be truly comprehensible to humans. We focus on explaining black-box models by using decision trees of limited depth as surrogate models. Specifically, we propose an approach based on microaggregation that trades off the comprehensibility and representativeness of the surrogate model on the one hand against the privacy of the subjects used for training the black-box model on the other.
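The following is a minimal illustrative sketch, not the authors' implementation, of the surrogate-model idea the abstract describes, assuming scikit-learn is available: a black-box classifier is approximated by a shallow decision tree trained on microaggregated group centroids. K-means is used here only as a stand-in for true microaggregation (the paper's method would use a microaggregation algorithm such as MDAV, which enforces a minimum group size k); the names K and MAX_DEPTH are illustrative constants, not from the paper.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

K = 5          # target group size for the microaggregation step (illustrative)
MAX_DEPTH = 3  # shallow surrogate tree, kept small for comprehensibility

X, y = load_breast_cancer(return_X_y=True)

# 1. Black-box model whose decisions we want to explain.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# 2. Microaggregation (approximated with k-means): partition the records
#    into groups of roughly K and replace each group by its centroid.
#    Centroids do not expose any individual training record, which is the
#    privacy side of the trade-off.
n_groups = len(X) // K
groups = KMeans(n_clusters=n_groups, n_init=10, random_state=0).fit_predict(X)
centroids = np.array([X[groups == g].mean(axis=0) for g in range(n_groups)])

# 3. Label each centroid with the black-box prediction and fit the shallow
#    surrogate tree on the (centroid, predicted label) pairs.
surrogate = DecisionTreeClassifier(max_depth=MAX_DEPTH, random_state=0)
surrogate.fit(centroids, black_box.predict(centroids))

# Fidelity: how often the surrogate agrees with the black box on the data.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity to black box: {fidelity:.3f}")
print(export_text(surrogate))
```

Lowering K or raising MAX_DEPTH makes the surrogate more faithful and representative but less private and less comprehensible, which is the trade-off the abstract refers to.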
Year
2020
DOI
10.1016/j.knosys.2020.105532
Venue
Knowledge-Based Systems
Keywords
Explainability, Machine learning, Data protection, Microaggregation, Privacy
DocType
Journal
Volume
194
ISSN
0950-7051
Citations
1
PageRank
0.34
References
0
Authors
4
Name                     Order  Citations  PageRank
Alberto Blanco-Justicia  1      14         6.77
Josep Domingo-Ferrer     2      67         12.25
Sergio Martínez          3      167        13.34
David Sánchez            4      690        33.01