Title
Making Deep Neural Networks Right For The Right Scientific Reasons By Interacting With Their Explanations
Abstract
Deep learning approaches can show excellent performance but still have limited practical use if they learn to predict based on confounding factors in a dataset, for instance text labels in the corner of images. By using an explanatory interactive learning approach, with a human expert in the loop during training, it becomes possible to avoid predictions based on confounding factors.

Deep neural networks have demonstrated excellent performance in many real-world applications. Unfortunately, they may show Clever Hans-like behaviour (making use of confounding factors within datasets) to achieve high performance. In this work we introduce the novel learning setting of explanatory interactive learning and illustrate its benefits on a plant phenotyping research task. Explanatory interactive learning brings the scientist into the training loop, where they interactively revise the original model by providing feedback on its explanations. Our experimental results demonstrate that explanatory interactive learning can help to avoid Clever Hans moments in machine learning and encourages (or discourages, if appropriate) trust in the underlying model.
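The abstract describes constraining a model so its explanations avoid expert-flagged confounders. A minimal illustrative sketch (not the paper's implementation, and with all names and data invented here) uses a linear logistic model, where the input-gradient explanation is simply the weight vector; expert feedback becomes a mask over confounding features, and a quadratic penalty on the masked explanation keeps the model from relying on them:

```python
import numpy as np

# Toy data: one genuinely predictive feature and one confounder that
# nearly duplicates the label (e.g. a text tag in the image corner).
rng = np.random.default_rng(0)
n = 200
signal = rng.normal(size=(n, 1))                              # true feature
confound = np.sign(signal) + 0.01 * rng.normal(size=(n, 1))   # spurious copy
X = np.hstack([signal, confound])
y = (signal[:, 0] > 0).astype(float)

# Expert feedback: feature 1 is a confounder the model must not use.
mask = np.array([0.0, 1.0])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient descent on cross-entropy plus an explanation penalty.
# For this linear model the input gradient equals w, so the penalty
# lam * sum((mask * w)**2) shrinks weights on masked features.
w = np.zeros(2)
lr, lam = 0.1, 5.0
for _ in range(500):
    p = sigmoid(X @ w)
    grad = X.T @ (p - y) / n        # cross-entropy gradient
    grad += 2.0 * lam * mask * w    # "right reasons" penalty gradient
    w -= lr * grad

# The confounder weight stays near zero; the true feature carries the
# prediction, even though the confounder alone would fit the labels.
print(w)
```

Without the penalty term, the optimizer would happily load weight onto the confounding feature, which is the Clever Hans behaviour the abstract refers to.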
Year
2020
DOI
10.1038/s42256-020-0212-3
Venue
NATURE MACHINE INTELLIGENCE
DocType
Volume
2
Issue
8
Journal
NATURE MACHINE INTELLIGENCE
Citations
4
PageRank
0.53
References
0
Authors
9