Title
Why should you trust my interpretation? Understanding uncertainty in LIME predictions.
Abstract
Methods for interpreting machine learning black-box models increase the outcomes' transparency and, in turn, generate insight into the reliability and fairness of the algorithms. However, the interpretations themselves could contain significant uncertainty that undermines trust in the outcomes and raises concern about the model's reliability. Focusing on the method Local Interpretable Model-agnostic Explanations (LIME), we demonstrate the presence of two sources of uncertainty, namely the randomness in its sampling procedure and the variation of interpretation quality across different input data points. Such uncertainty is present even in models with high training and test accuracy. We apply LIME to synthetic data and two public data sets, text classification in 20 Newsgroups and recidivism risk-scoring in COMPAS, to support our argument.
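The first source of uncertainty the abstract names, randomness in LIME's sampling procedure, can be illustrated with a minimal sketch: LIME-style explanations fit a proximity-weighted linear surrogate to a black-box model on randomly perturbed samples around one input, so rerunning the procedure with different random seeds yields different feature attributions. The toy model, kernel width, and sample count below are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

# Toy black-box classifier: a nonlinear function of two features (assumption).
def black_box(X):
    return (X[:, 0] ** 2 + np.sin(3 * X[:, 1]) > 0.5).astype(float)

def lime_style_explanation(x0, seed, n_samples=200, width=0.5):
    """Fit a proximity-weighted local linear surrogate around x0 (LIME-style sketch)."""
    rng = np.random.default_rng(seed)
    # Perturb the instance of interest with Gaussian noise.
    Z = x0 + rng.normal(scale=width, size=(n_samples, x0.size))
    y = black_box(Z)
    # Exponential proximity kernel: nearby samples get more weight.
    w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / width ** 2)
    A = np.hstack([Z, np.ones((n_samples, 1))])  # design matrix with intercept
    # Weighted least squares for the surrogate's coefficients.
    W = np.diag(w)
    coef, *_ = np.linalg.lstsq(W @ A, W @ y, rcond=None)
    return coef[:2]  # per-feature attributions

x0 = np.array([0.8, 0.1])
explanations = np.array([lime_style_explanation(x0, seed) for seed in range(20)])
print("mean attribution:", explanations.mean(axis=0))
print("std over reruns :", explanations.std(axis=0))  # nonzero: pure sampling noise
```

Even though the black-box model is fixed and deterministic, the standard deviation across reruns is nonzero, which is exactly the sampling-induced uncertainty the abstract describes.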
Year: 2019
Venue: arXiv: Learning
DocType: Journal
Citations: 0
PageRank: 0.34
References: 0
Authors: 5
Name             Order  Citations  PageRank
Hui Fen          1      0          0.34
Kuangyan Song    2      0          0.34
Madeleine Udell  3      0          0.34
Yiming Sun       4      2          2.11
Yujia Zhang      5      3          2.41