Title: Explaining classifier decisions linguistically for stimulating and improving operators labeling behavior
Abstract: In decision support and classification systems, operators or experts usually need to provide class labels for a significant number of process samples so that reliable machine learning classifiers can be established. Such labels are often affected by significant uncertainty and inconsistency owing to variations in the operators’ experience and condition during the labeling process, which typically results in significant, unintended class overlaps. We propose several new concepts for providing enhanced explanations of classifier decisions in linguistic (human-readable) form. These are intended to help operators better understand the decision process and to support them during sample annotation, improving their certainty and consistency in successive labeling cycles. This is expected to lead to better, more consistent data sets (streams) for use in training and updating classifiers. The enhanced explanations comprise (1) grounded reasons for classification decisions, represented as linguistically readable fuzzy rules, (2) a classifier’s level of uncertainty in relation to its decisions and possible alternative suggestions, (3) the degree of novelty of the current sample and (4) the levels of impact of the input features on the current classification response. The last of these are based on a newly developed approach for eliciting instance-based feature importance levels, and are also used to shorten the rules to at most 3 to 4 antecedent parts to ensure readability for operators and users. The proposed techniques were embedded within an annotation GUI and applied to a real-world application scenario from the field of visual inspection. The usefulness of the proposed linguistic explanations was evaluated in experiments conducted with six operators. The results indicate approximately an 80% chance that operator/user labeling behavior improves significantly when enhanced linguistic explanations are provided, whereas this chance drops to 10% when only the classifier responses are shown.
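To make the composition of such an explanation concrete, the following is a minimal Python sketch, not the authors' implementation: all names, the [0, 1] scales, the 0.8 certainty threshold for showing an alternative class and the sample values are illustrative assumptions. It assembles the four components named in the abstract (fuzzy-rule reasons, certainty with an alternative suggestion, novelty, and feature-importance-based shortening of the rule to at most 3 to 4 antecedent parts).

    # Illustrative sketch only; names, scales and thresholds are assumptions,
    # not taken from the paper.
    from dataclasses import dataclass

    @dataclass
    class Antecedent:
        feature: str          # e.g. "grey-level deviation"
        linguistic_term: str  # e.g. "HIGH"

    def explain(rule_class: str,
                antecedents: list,   # antecedents of the rule that fired most
                importances: dict,   # instance-based feature importance in [0, 1]
                certainty: float,    # classifier certainty in [0, 1]
                alternative: str,    # second most likely class
                novelty: float,      # degree of novelty of the sample in [0, 1]
                max_parts: int = 4) -> str:
        # Keep only the most important antecedent parts (here at most 4),
        # so the linguistic rule stays readable for operators.
        kept = sorted(antecedents,
                      key=lambda a: importances.get(a.feature, 0.0),
                      reverse=True)[:max_parts]
        reason = " AND ".join(f"{a.feature} is {a.linguistic_term}" for a in kept)
        lines = [f"Decision: class '{rule_class}' because {reason}.",
                 f"Certainty: {certainty:.0%}"
                 # Assumed policy: show the alternative class when certainty is low.
                 + (f" (alternative: '{alternative}')" if certainty < 0.8 else ""),
                 f"Novelty of this sample: {novelty:.0%}"]
        return "\n".join(lines)

    if __name__ == "__main__":
        rule = [Antecedent("grey-level deviation", "HIGH"),
                Antecedent("area of defect", "LARGE"),
                Antecedent("contrast", "MEDIUM"),
                Antecedent("circularity", "LOW"),
                Antecedent("border gradient", "HIGH")]
        imp = {"grey-level deviation": 0.9, "area of defect": 0.7,
               "contrast": 0.2, "circularity": 0.5, "border gradient": 0.1}
        print(explain("bad part", rule, imp, certainty=0.64,
                      alternative="good part", novelty=0.12))

For the sample inputs above, the sketch prints a three-line explanation (decision with its shortened rule-based reasons, certainty with an alternative class, and novelty), mirroring the four components listed in the abstract.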
Year: 2017
DOI: 10.1016/j.ins.2017.08.012
Venue: Information Sciences
Keywords: Linguistic explanation of classifier decisions, Operators’ labeling behavior, Classification reasons, Transparent fuzzy rules, Classifier certainty, Degree of novelty, Instance-based feature importance levels
Field: Visual inspection, Annotation, Decision support system, Fuzzy logic, Readability, Artificial intelligence, Operator (computer programming), Novelty, Classifier (linguistics), Mathematics, Machine learning
DocType: Journal
Volume: 420
Issue: C
ISSN: 0020-0255
Citations: 6
PageRank: 0.45
References: 21
Authors: 6
Name                  Order  Citations  PageRank
Edwin Lughofer        1      27         5.02
Roland Richter        2      6          0.45
Ulrich Neissl         3      8          1.15
Wolfgang Heidl        4      103        7.01
Christian Eitzinger   5      164        15.33
Thomas Radauer        6      66         4.94