Title
Evaluation of Similarity-based Explanations
Abstract
Explaining predictions made by complex machine learning models helps users understand and accept the predicted outputs with confidence. One way to achieve this goal is to use similarity-based explanation, which provides similar instances as evidence to support a model's prediction. Several relevance metrics are used for this purpose. In this study, we investigate which relevance metrics can provide reasonable explanations to users. Specifically, we adopt three tests to evaluate whether the relevance metrics satisfy the minimal requirements for similarity-based explanation. Our experiments reveal that the cosine similarity of the gradients of the loss performs best, making it a recommended choice in practice. In addition, we show that some metrics perform poorly in our tests, and we analyze the reasons for their failure. We expect our insights to help practitioners select appropriate relevance metrics, and also to facilitate further research on designing better relevance metrics for explanations.
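For readers unfamiliar with the metric the abstract singles out, below is a minimal Python/PyTorch sketch (not the authors' implementation; the model, loss function, and data handling are assumptions) of gradient cosine relevance: each training instance is scored by the cosine similarity between the loss gradient at that instance and the loss gradient at the test instance.

import torch
import torch.nn.functional as F

def loss_gradient(model, loss_fn, x, y):
    # Flattened gradient of the loss w.r.t. all trainable parameters,
    # computed for a single instance (x, y).
    loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
    params = [p for p in model.parameters() if p.requires_grad]
    grads = torch.autograd.grad(loss, params)
    return torch.cat([g.reshape(-1) for g in grads])

def gradient_cosine_relevance(model, loss_fn, x_test, y_test, train_pairs):
    # Relevance of each training instance to the test prediction:
    # cosine similarity between the two flattened loss gradients.
    # This is a sketch of the metric named in the abstract, not the paper's code.
    g_test = loss_gradient(model, loss_fn, x_test, y_test)
    return torch.stack([
        F.cosine_similarity(g_test, loss_gradient(model, loss_fn, x, y), dim=0)
        for x, y in train_pairs
    ])

In a typical similarity-based explanation setup, y_test would be the model's predicted label, and the training instances with the highest scores are presented as evidence for the prediction.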
Year
2021
Venue
ICLR
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
4
Name              Order   Citations   PageRank
Kazuaki Hanawa    1       3           4.52
Sho Yokoi         2       0           2.37
Satoshi Hara      3       611         2.40
Kentaro Inui      4       10081       20.35