Title
ADVERSARIAL ATTACKS ON OBJECT DETECTORS WITH LIMITED PERTURBATIONS
Abstract
Deep convolutional neural networks are widely known to be vulnerable to adversarial attacks. Recently, great progress has been made in attacking object detectors. However, current attacks neglect practical utility and rely on global perturbations of the target image, requiring a large number of patches or pixels. In this paper, we present a novel attack framework named DTTACK that fools both one-stage and two-stage object detectors with limited perturbations. A novel divergent patch shape consisting of four intersecting lines is proposed to effectively disrupt deep convolutional feature extraction with a limited number of pixels. In particular, we introduce an instance-aware heat map as a self-attention module that helps DTTACK focus on salient object areas, further improving attack performance. Extensive experiments on PASCAL VOC, MS COCO, and an online detection system demonstrate that DTTACK surpasses state-of-the-art methods by large margins.
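The abstract gives no implementation details, so the following is only a minimal sketch of the divergent-patch idea as described there: a binary mask of four lines intersecting at a common center, so that a perturbation confined to the mask touches few pixels while still cutting across many convolutional receptive fields. The function name, angles, line length, and thickness below are illustrative assumptions, not the authors' construction, and the instance-aware heat map used to place the patch is omitted.

# Hypothetical sketch of a "divergent patch" mask: four lines crossing
# at a common center. Perturbations are applied only where the mask is
# set, keeping the number of perturbed pixels small. All parameters
# here are assumptions; the paper's actual construction may differ.
import numpy as np

def divergent_patch_mask(h, w, center, length=31, thickness=1,
                         angles_deg=(0, 45, 90, 135)):
    """Return a boolean (h, w) mask of four lines crossing at `center`."""
    mask = np.zeros((h, w), dtype=bool)
    cy, cx = center
    for ang in angles_deg:
        theta = np.deg2rad(ang)
        # Rasterize the line by sampling points along its direction.
        for t in np.linspace(-length / 2, length / 2, 4 * length):
            y = int(round(cy + t * np.sin(theta)))
            x = int(round(cx + t * np.cos(theta)))
            # Thicken each sampled point to `thickness` pixels.
            y0, y1 = max(y - thickness + 1, 0), min(y + thickness, h)
            x0, x1 = max(x - thickness + 1, 0), min(x + thickness, w)
            mask[y0:y1, x0:x1] = True
    return mask

# Apply an adversarial perturbation only on the masked pixels.
image = np.random.rand(224, 224, 3).astype(np.float32)  # stand-in input
delta = 0.1 * np.random.randn(224, 224, 3).astype(np.float32)
m = divergent_patch_mask(224, 224, center=(112, 112))[..., None]
adv = np.clip(image + m * delta, 0.0, 1.0)

In an actual attack, delta would be optimized (e.g., by gradient ascent on the detector's loss) rather than drawn at random, and the mask center would be chosen from salient object regions.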
Year
2021
DOI
10.1109/ICASSP39728.2021.9414125
Venue
2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021)
Keywords
Adversarial Attack, Object Detection, Adversarial Examples, Classifiers
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
7
Name            Order  Citations  PageRank
Zhenbo Shi      1      0          2.03
Wei Yang        2      2865       4.48
Zhenbo Xu       3      3          4.77
Zhi Chen        4      0          3.72
Yingjie Li      5      0          1.01
Haoran Zhu      6      0          0.34
Liusheng Huang  7      4736       4.55