Title
Explain to Fix: A Framework to Interpret and Correct DNN Object Detector Predictions.
Abstract
Explaining predictions of deep neural networks (DNNs) is an important and nontrivial task. In this paper, we propose a practical approach to interpret decisions made by a DNN object detector that has fidelity comparable to state-of-the-art methods and sufficient computational efficiency to process large datasets. Our method relies on recent theory and approximates Shapley feature importance values. We qualitatively and quantitatively show that the proposed explanation method can be used to find image features which cause failures in DNN object detection. The developed software tool, integrated into the Explain to Fix (E2X) framework, is roughly an order of magnitude (10x) more computationally efficient than prior methods and supports cluster processing on graphics processing units (GPUs). Lastly, we propose a potential extension of the E2X framework in which the discovered missing features are added to the training dataset so that the failures are overcome after model retraining.
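The abstract states that the method approximates Shapley feature importance values. As an illustrative aside only (not the authors' E2X implementation), the sketch below shows a common Monte Carlo permutation approximation of Shapley values for a set of image regions; `score_fn` and the region-masking scheme are hypothetical placeholders.

```python
import numpy as np

# Minimal sketch (not the paper's E2X code): Monte Carlo approximation of
# Shapley importance values for `num_regions` image regions. `score_fn` is a
# hypothetical callable returning the detector's confidence for a target
# object when only the regions in the given subset are left unmasked.

def shapley_importance(score_fn, num_regions, num_samples=200, seed=0):
    rng = np.random.default_rng(seed)
    phi = np.zeros(num_regions)
    for _ in range(num_samples):
        perm = rng.permutation(num_regions)
        included = []
        prev_score = score_fn(included)        # score with no regions revealed
        for r in perm:
            included.append(int(r))
            curr_score = score_fn(included)    # score after revealing region r
            phi[r] += curr_score - prev_score  # marginal contribution of r
            prev_score = curr_score
    return phi / num_samples                   # average over sampled permutations


# Toy usage with a synthetic scoring function: regions 2 and 5 matter most.
if __name__ == "__main__":
    true_weights = np.array([0.0, 0.1, 0.6, 0.0, 0.05, 0.25])

    def toy_score(subset):
        return float(true_weights[subset].sum()) if subset else 0.0

    print(shapley_importance(toy_score, num_regions=6))
```

Exact Shapley values require enumerating all region subsets, which is exponential in the number of regions; sampling permutations as above trades exactness for the kind of computational efficiency the abstract emphasizes.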
Year
2018
Venue
arXiv: Computer Vision and Pattern Recognition
DocType
Journal
Volume
abs/1811.08011
Citations
0
PageRank
0.34
References
0
Authors
5
Name                Order  Citations  PageRank
Denis A. Gudovskiy  1      6          2.44
Alec Hodgkinson     2      3          0.72
Takuya Yamaguchi    3      0          1.69
Yasunori Ishii      4      0          0.34
Sotaro Tsukizawa    5      3          0.72