Title
Detecting Unusual Input To Neural Networks
Abstract
Evaluating a neural network on an input that differs markedly from the training data can cause erratic and flawed predictions. We study a method that judges the unusualness of an input by evaluating its informational content relative to the learned parameters. This technique can be used to judge whether a network is suitable for processing a given input and to raise a red flag that unexpected behavior might lie ahead. We compare our approach to several uncertainty-evaluation methods from the literature across different datasets and scenarios. In particular, we introduce a simple, effective method that allows the outputs of such metrics to be compared directly for single input points, even when these metrics live on different scales.
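The abstract does not spell out the exact statistic, but the keywords point to Fisher information. As an illustration only, the following PyTorch sketch scores a single input by the trace of the per-input Fisher information of a classifier's predictive distribution with respect to its parameters; the toy network, input shapes, and the use of the trace as a scalar summary are assumptions made for this example, not the paper's exact construction.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fisher_trace(model, x):
    """Trace of the per-input Fisher information of the predictive
    distribution with respect to the model parameters:
        tr F(x) = sum_y p(y|x) * ||grad_theta log p(y|x)||^2.
    A comparatively large value indicates that x would be unusually
    informative about the learned parameters, which can serve as an
    out-of-distribution red flag."""
    params = list(model.parameters())
    log_probs = F.log_softmax(model(x.unsqueeze(0)), dim=-1).squeeze(0)
    probs = log_probs.exp().detach()
    trace = 0.0
    for y in range(log_probs.shape[0]):
        # Per-class gradient of the log-likelihood w.r.t. all parameters.
        grads = torch.autograd.grad(log_probs[y], params, retain_graph=True)
        trace += float(probs[y]) * sum(float((g ** 2).sum()) for g in grads)
    return trace

# Hypothetical toy classifier; in practice the model would be trained first.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))
x_typical = torch.randn(20)         # input resembling standard-normal training data
x_shifted = 10.0 * torch.randn(20)  # strongly shifted, "unusual" input
print(fisher_trace(model, x_typical), fisher_trace(model, x_shifted))
```

In a deployment setting, such a score would be compared against values observed on held-out training data, for example by flagging inputs whose score exceeds a high empirical quantile.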
Year
2021
DOI
10.1007/s10489-020-01925-8
Venue
APPLIED INTELLIGENCE
Keywords
Deep learning, Trustworthiness, Fisher information, Uncertainty, Out-of-distribution
DocType
Journal
Volume
51
Issue
4
ISSN
0924-669X
Citations
0
PageRank
0.34
References
0
Authors
2
Name              Order    Citations    PageRank
Jörg Martin       1        0            0.68
Clemens Elster    2        96           14.27