Title
Testing DNN image classifiers for confusion & bias errors
Abstract
We found that many of the reported erroneous cases in popular DNN image classifiers occur because the trained models confuse one class with another or show biases towards some classes over others. Most existing DNN testing techniques focus on per-image violations and so fail to detect class-level confusions or biases. We developed a testing technique to automatically detect class-based confusion and bias errors in DNN-driven image classification software. We evaluated our implementation, DeepInspect, on several popular image classifiers; it achieved precision up to 100% (avg. 72.6%) for confusion errors and up to 84.3% (avg. 66.8%) for bias errors.
Year
2020
DOI
10.1145/3377812.3390799
Venue
International Conference on Software Engineering
Keywords
whitebox testing, deep learning, DNNs, image classifiers, bias
DocType
Conference
ISSN
0270-5257
Citations
0
PageRank
0.34
References
0
Authors
5
Name              Order  Citations  PageRank
Yuchi Tian        1      2          2.38
Ziyuan Zhong      2      3          2.73
Vicente Ordonez   3      1418       69.65
Gail E. Kaiser    4      2          1.37
Baishakhi Ray     5      737        34.84