Title: Human-grounded Evaluations of Explanation Methods for Text Classification
Abstract
Due to the black-box nature of deep learning models, methods for explaining the models' results are crucial for gaining human trust and supporting collaboration between AI systems and humans. In this paper, we consider several model-agnostic and model-specific explanation methods for CNNs for text classification and conduct three human-grounded evaluations, focusing on different purposes of explanations: (1) revealing model behavior, (2) justifying model predictions, and (3) helping humans investigate uncertain predictions. The results highlight dissimilar qualities of the explanation methods we consider and show the degree to which each method can serve each purpose.
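To illustrate what a "model-agnostic" explanation method looks like in practice, the sketch below applies LIME, a widely used perturbation-based explainer, to a text classifier through its prediction function alone. This is a hedged illustration rather than the authors' experimental setup: the classifier here is a toy stand-in for a trained CNN, and the names predict_proba and the class labels are assumptions.

```python
# Minimal, hypothetical sketch: a model-agnostic explanation via LIME.
# The classifier is a toy stand-in, not the paper's CNN.
import numpy as np
from lime.lime_text import LimeTextExplainer

def predict_proba(texts):
    # Stand-in for a trained CNN text classifier:
    # maps a list of strings to an (n, 2) array of class probabilities.
    pos = np.array([0.9 if "good" in t.lower() else 0.2 for t in texts])
    return np.column_stack([1.0 - pos, pos])

explainer = LimeTextExplainer(class_names=["negative", "positive"])
exp = explainer.explain_instance(
    "The plot was good but the acting was dreadful.",
    predict_proba,
    num_features=5,  # report the five words with the largest attributed weight
)
print(exp.as_list())  # [(word, weight), ...]: per-word evidence for a class
```

Because LIME only queries predict_proba, the same call works unchanged for any classifier, which is precisely the model-agnostic property the abstract contrasts with model-specific methods that require access to the CNN's internals.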
Year: 2019
DOI: 10.18653/v1/D19-1523
Venue: EMNLP/IJCNLP (1)
DocType: Conference
Volume: D19-1
Citations: 0
PageRank: 0.34
References: 0
Authors: 2
Name                         Order   Citations   PageRank
Piyawat Lertvittayakumjorn   1       2           3.06
Francesca Toni               2       343         27.02