Title
Meaningful Explanations of Black Box AI Decision Systems
Abstract
Black box AI systems for automated decision making, often based on machine learning over (big) data, map a user's features into a class or a score without exposing the reasons why. This is problematic not only for lack of transparency, but also for possible biases inherited by the algorithms from human prejudices and collection artifacts hidden in the training data, which may lead to unfair or wrong decisions. We focus on the urgent open challenge of how to construct meaningful explanations of opaque AI/ML systems, introducing the local-to-global framework for black box explanation, articulated along three lines: (i) the language for expressing explanations in terms of logic rules, with statistical and causal interpretation; (ii) the inference of local explanations for revealing the decision rationale for a specific case, by auditing the black box in the vicinity of the target instance; (iii) the bottom-up generalization of many local explanations into simple global ones, with algorithms that optimize for quality and comprehensibility. We argue that the local-first approach opens the door to a wide variety of alternative solutions along different dimensions: a variety of data sources (relational, text, images, etc.), a variety of learning problems (multi-label classification, regression, scoring, ranking), a variety of languages for expressing meaningful explanations, and a variety of means to audit a black box.
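The local explanation step described in the abstract (auditing the black box in the vicinity of a target instance, then expressing the rationale as logic rules) can be illustrated with a minimal sketch. This is not the paper's algorithm: the model, data, perturbation scheme, and surrogate choice below are assumptions for illustration. A shallow decision tree stands in for the rule language, since each root-to-leaf path reads as a logic rule.

```python
# Illustrative sketch only (not the paper's algorithm): explain one black box
# decision by auditing the model around the target instance and fitting an
# interpretable surrogate whose paths read as logic rules.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# A stand-in opaque model trained on synthetic data (assumption).
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def explain_locally(x, n_samples=1000, radius=0.5, seed=0):
    """Audit the black box around x; return (surrogate, fidelity)."""
    rng = np.random.default_rng(seed)
    # Synthetic neighbours of the target instance (Gaussian perturbation).
    Z = x + rng.normal(scale=radius, size=(n_samples, x.size))
    yz = black_box.predict(Z)  # the black box labels the audit points
    # A shallow tree is the comprehensible local model; its paths are rules.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(Z, yz)
    return surrogate, surrogate.score(Z, yz)  # fidelity to the black box

x = X[0]
surrogate, fidelity = explain_locally(x)
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(4)]))
print(f"local fidelity: {fidelity:.2f}")
```

In the local-to-global framework such local rules would then be generalized bottom-up into a small set of global ones; here the sketch stops at the local step.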
Year
2019
DOI
10.1609/aaai.v33i01.33019780
Venue
AAAI
Field
Black box (phreaking), Transparency (graphic), Audit, Ranking, Inference, Computer science, Prejudice, Natural language processing, Artificial intelligence, Black box, Rule of inference, Machine learning
DocType
Conference
Volume
33
Citations
2
PageRank
0.41
References
0
Authors
6
Name                Order  Citations  PageRank
Dino Pedreschi      1      3083       244.47
Fosca Giannotti     2      2948       253.39
Riccardo Guidotti   3      112        24.81
Anna Monreale       4      581        42.49
Salvatore Ruggieri  5      518        68.63
Franco Turini       6      842        101.81