Title: Evaluating Feature Importance Estimates
Abstract: Estimating the influence of a given input feature on a model's prediction is challenging. We introduce ROAR (RemOve And Retrain), a benchmark to evaluate the accuracy of interpretability methods that estimate input feature importance in deep neural networks. We remove a fraction of the input features deemed most important according to each estimator and measure the change in model accuracy after retraining. The most accurate estimator is the one that identifies as important those inputs whose removal causes the most damage to model performance relative to all other estimators. This evaluation produces thought-provoking results: we find that several estimators are less accurate than a random assignment of feature importance. However, averaging a set of squared noisy estimates (a variant of a technique proposed by Smilkov et al. (2017)) leads to significant gains in accuracy for each method considered and far outperforms such a random guess.
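The procedure the abstract describes is concrete enough to sketch in code. Below is a minimal, hypothetical Python/NumPy sketch of the two techniques named: averaging squared noisy saliency estimates (the squared-SmoothGrad variant), and the ROAR loop itself, which occludes the top-ranked features in both the training and test splits, retrains from scratch, and records test accuracy. All names here (smoothgrad_squared, occlude_top_k, roar_curve, and the grad_fn / train_and_eval callables) are illustrative placeholders, not the authors' implementation. Retraining is the key step: occlusion shifts the input distribution, and evaluating a fixed model would conflate that shift with genuine feature importance.

import numpy as np

def smoothgrad_squared(grad_fn, x, n_samples=15, sigma=0.15):
    """Average of squared noisy saliency estimates (squared-SmoothGrad).

    grad_fn: hypothetical callable mapping an input array to a saliency
    map of the same shape (e.g. the gradient of a class logit w.r.t. x).
    """
    acc = np.zeros_like(x, dtype=float)
    for _ in range(n_samples):
        noisy = x + np.random.normal(0.0, sigma, size=x.shape)
        acc += grad_fn(noisy) ** 2
    return acc / n_samples

def occlude_top_k(x, scores, fraction, fill_value=0.0):
    """Replace the `fraction` highest-scoring features of each example
    with a constant (a stand-in for the paper's per-channel mean)."""
    flat_x = x.reshape(len(x), -1).copy()
    flat_s = scores.reshape(len(x), -1)
    k = int(fraction * flat_x.shape[1])
    top = np.argsort(-flat_s, axis=1)[:, :k]  # indices of the top-k features
    np.put_along_axis(flat_x, top, fill_value, axis=1)
    return flat_x.reshape(x.shape)

def roar_curve(train, test, fractions, train_and_eval):
    """ROAR: for each occlusion fraction t, degrade BOTH splits using the
    estimator's ranking, retrain from scratch via the hypothetical
    train_and_eval callable, and record test accuracy. A more accurate
    estimator yields a steeper accuracy drop than a random ranking."""
    (x_tr, y_tr, s_tr), (x_te, y_te, s_te) = train, test
    curve = {}
    for t in fractions:
        x_tr_t = occlude_top_k(x_tr, s_tr, t)
        x_te_t = occlude_top_k(x_te, s_te, t)
        curve[t] = train_and_eval(x_tr_t, y_tr, x_te_t, y_te)
    return curve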
Year: 2018
Venue: arXiv: Learning
Field: Interpretability, Square (algebra), Random assignment, Artificial intelligence, Model prediction, Deep neural networks, Machine learning, Mathematics, Estimator
DocType: Journal
Volume: abs/1806.10758
Citations: 4
PageRank: 0.39
References: 33
Authors: 4
Name                    Order  Citations  PageRank
Sara Hooker             1      32         2.64
Dumitru Erhan           2      3285       201.19
Pieter-Jan Kindermans   3      129        6.39
Been Kim                4      353        21.44