Title
Quantifying DNN Model Robustness to the Real-World Threats
Abstract
DNN models have suffered from adversarial example attacks, which lead to inconsistent prediction results. As opposed to gradient-based attacks, which assume white-box access to the model by the attacker, we focus on more realistic input perturbations from the real world and their actual impact on model robustness, without any attacker being present. In this work, we promote a standardized framework to quantify robustness against real-world threats. It is composed of a set of safety properties associated with common violations, a group of metrics that measure the minimal perturbation causing a violation, and various criteria that reflect different aspects of model robustness. By comparing 13 pre-trained ImageNet classifiers, three state-of-the-art object detectors, and three cloud-based content moderators through this framework, we report the status quo of real-world model robustness. Beyond that, we provide robustness benchmarking datasets for the community.
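The abstract's metric of the minimal perturbation that triggers a violation can be sketched in a few lines. The following Python snippet is an illustrative assumption, not the paper's implementation: the `model` and `perturb` callables, the severity sweep, and the monotonicity assumption behind the binary search are all hypothetical choices made for the sketch.

```python
import numpy as np

def minimal_perturbation_severity(model, image, perturb,
                                  levels=np.linspace(0.0, 1.0, 101)):
    """Smallest perturbation level at which the model's top-1 prediction
    deviates from its prediction on the clean input, or None if no level
    in the sweep causes a deviation.

    model:   hypothetical callable mapping an image array to a label
    perturb: hypothetical callable (image, level) -> perturbed image,
             e.g. blur or brightness change at the given intensity
    """
    clean_label = model(image)
    lo, hi = 0, len(levels) - 1
    # If even the strongest perturbation leaves the prediction unchanged,
    # report the model as robust over the whole sweep.
    if model(perturb(image, levels[hi])) == clean_label:
        return None
    # Binary search over severity; assumes the prediction, once flipped,
    # stays flipped for stronger perturbations.
    while lo < hi:
        mid = (lo + hi) // 2
        if model(perturb(image, levels[mid])) == clean_label:
            lo = mid + 1
        else:
            hi = mid
    return levels[lo]
```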
Year
2020
DOI
10.1109/DSN48063.2020.00033
Venue
2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN)
Keywords
neural networks, adversarial example, robustness, threat severity, safety
DocType
Conference
ISSN
1530-0889
ISBN
978-1-7281-5810-5
Citations
0
PageRank
0.34
References
6
Authors
3
Name            Order   Citations   PageRank
Zhenyu Zhong    1       0           0.34
Zhisheng Hu     2       7           3.86
Xiaowei Chen    3       0           0.34