Title
Testing DNN image classifiers for confusion & bias errors
Abstract
Image classifiers are an important component of today's software, from consumer and business applications to safety-critical domains. The advent of Deep Neural Networks (DNNs) is the key catalyst behind such widespread success. However, wide adoption comes with serious concerns about the robustness of software systems dependent on DNNs for image classification, as several severe erroneous behaviors have been reported under sensitive and critical circumstances. We argue that developers need to rigorously test their software's image classifiers and delay deployment until the classifiers reach an acceptable level of robustness. We present an approach to testing image classifier robustness based on class property violations. We found that many of the reported erroneous cases in popular DNN image classifiers occur because the trained models confuse one class with another or show biases towards some classes over others. These bugs usually violate some class properties of one or more of those classes. Most DNN testing techniques focus on per-image violations and thus fail to detect class-level confusions or biases. We developed a testing technique to automatically detect class-based confusion and bias errors in DNN-driven image classification software. We evaluated our implementation, DeepInspect, on several popular image classifiers with precision up to 100% (avg. 72.6%) for confusion errors, and up to 84.3% (avg. 66.8%) for bias errors. DeepInspect found hundreds of classification mistakes in widely-used models, many exposing errors indicating confusion or bias.
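
For illustration only, and not the paper's own DeepInspect metric: a minimal sketch of the idea the abstract describes, namely flagging class-level confusion (pairs of classes a model systematically mixes up) rather than per-image failures. All names here (confusion_pairs, y_true, y_pred, threshold) are hypothetical and the labeled-prediction setup is an assumption for the sketch.

    # Hypothetical sketch: flag class pairs with a high mutual
    # misclassification rate, suggesting class-level confusion.
    import numpy as np

    def confusion_pairs(y_true, y_pred, num_classes, threshold=0.1):
        """Return (class_a, class_b, score) triples whose averaged
        mutual misclassification rate exceeds `threshold`."""
        counts = np.zeros((num_classes, num_classes), dtype=np.int64)
        for t, p in zip(y_true, y_pred):
            counts[t, p] += 1                      # confusion-matrix counts
        totals = counts.sum(axis=1, keepdims=True)  # examples per true class
        rates = counts / np.maximum(totals, 1)      # row-normalized rates
        flagged = []
        for a in range(num_classes):
            for b in range(a + 1, num_classes):
                # symmetric score: average of a->b and b->a error rates
                score = (rates[a, b] + rates[b, a]) / 2.0
                if score > threshold:
                    flagged.append((a, b, score))
        return sorted(flagged, key=lambda x: -x[2])

The sketch only mirrors the abstract's notion of class-level confusion errors; the paper's actual technique and metrics are defined in the publication itself.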
Year
2020
DOI
10.1145/3377811.3380400
Venue
International Conference on Software Engineering
Keywords
whitebox testing, deep learning, DNNs, image classifiers, bias
DocType
Conference
ISSN
0270-5257
Citations
2
PageRank
0.36
References
0
Authors
5
Name | Order | Citations | PageRank
Yuchi Tian | 1 | 2 | 2.38
Ziyuan Zhong | 2 | 3 | 2.73
Vicente Ordonez | 3 | 1418 | 69.65
Gail E. Kaiser | 4 | 2 | 1.37
Baishakhi Ray | 5 | 737 | 34.84