Abstract |
---|
We found that many of the reported erroneous cases in popular DNN image classifiers occur because the trained models confuse one class with another or show biases towards some classes over others. Most existing DNN testing techniques focus on per-image violations and therefore fail to detect class-level confusions or biases. We developed a testing technique to automatically detect class-based confusion and bias errors in DNN-driven image classification software. We evaluated our implementation, DeepInspect, on several popular image classifiers, achieving precision of up to 100% (avg. 72.6%) for confusion errors and up to 84.3% (avg. 66.8%) for bias errors. |
Year | DOI | Venue |
---|---|---|
2020 | 10.1145/3377812.3390799 | International Conference on Software Engineering |
Keywords | DocType | ISSN
---|---|---
whitebox testing, deep learning, DNNs, image classifiers, bias | Conference | 0270-5257
Citations | PageRank | References
---|---|---
0 | 0.34 | 0
Authors |
---|
5 |
Name | Order | Citations | PageRank |
---|---|---|---
Yuchi Tian | 1 | 2 | 2.38 |
Ziyuan Zhong | 2 | 3 | 2.73 |
Vicente Ordonez | 3 | 1418 | 69.65 |
Gail E. Kaiser | 4 | 2 | 1.37 |
Baishakhi Ray | 5 | 737 | 34.84 |