Title
Doctoral Consortium of WSDM'22: Exploring the Bias of Adversarial Defenses
Abstract
Deep neural networks (DNNs) have achieved extraordinary accomplishments on various machine learning tasks. However, the existence of adversarial attacks still raises great concerns when DNNs are adopted for safety-critical tasks. Various defense strategies have been proposed as countermeasures to protect DNN models against adversarial attacks. However, we find that the robustness ("safety") provided by robust training algorithms usually results in unequal performance, either among classes or among sub-populations across the whole data distribution. For example, the model can achieve extremely low accuracy / robustness on certain groups of data. As a result, the safety of the model is still under great threat. In summary, our project studies the bias problems of robustly trained neural networks from different perspectives, with the aim of eventually building reliable and safe deep learning models. We propose to present our research in the Doctoral Consortium at WSDM'22 and to share our contributions to the related problems.
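The abstract's central observation, that robust training can leave some classes far less robust than others, can be made concrete with a small evaluation sketch. The following is a minimal illustration and not code from the paper: it assumes a PyTorch classifier `model` with inputs in [0, 1] and a `loader` yielding (image, label) batches, and `pgd_attack` is a standard L-infinity PGD attack written here for completeness.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # Standard L-inf PGD: step in the sign of the loss gradient,
    # then project back into the eps-ball around the clean input.
    x_adv = x.clone()
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project onto eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)             # keep valid pixel range
    return x_adv.detach()

def per_class_robust_accuracy(model, loader, num_classes):
    # Tally adversarial accuracy separately for each class; a large spread
    # across classes is the unequal robustness the abstract describes.
    correct = torch.zeros(num_classes)
    total = torch.zeros(num_classes)
    model.eval()
    for x, y in loader:
        x_adv = pgd_attack(model, x, y)
        with torch.no_grad():
            pred = model(x_adv).argmax(dim=1)
        for c in range(num_classes):
            mask = y == c
            total[c] += mask.sum()
            correct[c] += (pred[mask] == c).sum()
    return correct / total.clamp(min=1)

A large gap between the best and worst entries of the returned vector is exactly the class-wise bias of adversarial defenses that the project studies.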
Year: 2022
DOI: 10.1145/3488560.3502215
Venue: WSDM
Keywords: Deep Learning, Robustness, Adversarial Attack
DocType: Conference
Citations: 0
PageRank: 0.34
References: 0
Authors: 1
Author details (Name | Order | Citations | PageRank):
Han Xu | 1 | 2 | 1.41