Title
Representer Point Selection for Explaining Deep Neural Networks.
Abstract
We propose to explain the predictions of a deep neural network by pointing to the set of what we call representer points in the training set, for a given test point prediction. Specifically, we show that we can decompose the pre-activation prediction of a neural network into a linear combination of activations of training points, with the weights corresponding to what we call representer values, which thus capture the importance of each training point on the learned parameters of the network. This decomposition also provides a deeper understanding of the network than training-point influence alone: positive representer values correspond to excitatory training points and negative values to inhibitory points, a distinction that, as we show, yields considerably more insight. Our method is also much more scalable, allowing for real-time feedback in a manner not feasible with influence functions.
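The decomposition described above can be sketched numerically for the simplest case: a linear last layer trained to stationarity with L2 regularization. At a stationary point, the weight vector equals a weighted sum of training features, with weights alpha_i = -(1/(2*lambda*n)) * dL/dPhi_i (the representer values), so any test prediction equals the sum of alpha_i times the feature dot products. A minimal NumPy sketch assuming a binary logistic loss; all variable names and data are illustrative, not from the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 5
F = rng.normal(size=(n, d))               # fixed last-layer features f_i
y = (F @ rng.normal(size=d) > 0).astype(float)

lam = 0.01                                 # L2 regularization strength
w = np.zeros(d)
for _ in range(20000):                     # gradient descent to (near) stationarity
    p = 1.0 / (1.0 + np.exp(-(F @ w)))     # sigmoid predictions
    grad = F.T @ (p - y) / n + 2 * lam * w # grad of (1/n) sum L_i + lam*||w||^2
    w -= 0.5 * grad

# representer values: alpha_i = -(1/(2*lam*n)) * dL_i/dPhi_i
g = 1.0 / (1.0 + np.exp(-(F @ w))) - y     # dL/dPhi for logistic loss
alpha = -g / (2 * lam * n)

f_t = rng.normal(size=d)                   # features of a test point
pred_direct = w @ f_t                      # pre-activation prediction
pred_representer = np.sum(alpha * (F @ f_t))  # same prediction, as a sum over training points
print(abs(pred_direct - pred_representer))    # ~0 at stationarity
```

At stationarity the two quantities agree; positive alpha_i mark excitatory training points for this prediction and negative alpha_i inhibitory ones, matching the abstract's interpretation.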
Year
2018
Venue
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018)
Keywords
deep neural networks, neural network, deep neural network, training set, deeper understanding, linear combination
DocType
Conference
Volume
31
ISSN
1049-5258
Citations
2
PageRank
0.36
References
0
Authors
4
Name | Order | Citations | PageRank
Chih-Kuan Yeh | 1 | 19 | 3.34
Joon Sik Kim | 2 | 3 | 2.40
Ian En-Hsu Yen | 3 | 84 | 9.56
Pradeep D. Ravikumar | 4 | 2185 | 155.99