Title
Assistive Signals for Deep Neural Network Classifiers
Abstract
Deep Neural Networks are brittle in that small changes in the input can drastically affect their prediction outcome and confidence. Consequently, research in this area has mainly focused on adversarial attacks and defenses. In this paper, we take an alternative stance and introduce the concept of Assistive Signals, which are perturbations optimized to improve a model's confidence score regardless of whether the model is under attack. We analyze some interesting properties of these assistive perturbations and extend the idea to optimize them in 3D space, simulating different lighting conditions and viewing angles. Experimental evaluations show that the assistive signals generated by our optimization method increase the accuracy and confidence of deep models more than those generated by conventional methods that work in 2D space. 'Assistive Signals' also illustrate the biases of ML models towards certain patterns in real-life objects.
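To make the core idea concrete, the sketch below (not the authors' implementation) shows how an additive assistive perturbation could be optimized in 2D image space by gradient ascent on a classifier's confidence in the intended class. The framework (PyTorch), the function name assistive_perturbation, and hyper-parameters such as epsilon, steps, and lr are illustrative assumptions.

```python
# Minimal sketch, assuming a PyTorch classifier and an image tensor of shape
# (1, C, H, W) with values in [0, 1]. This is not the paper's method, only an
# illustration of optimizing a bounded perturbation to raise class confidence.
import torch
import torch.nn.functional as F

def assistive_perturbation(model, image, label, epsilon=8 / 255, steps=100, lr=1e-2):
    """Return a bounded additive signal that increases confidence in `label`."""
    model.eval()
    delta = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        logits = model(torch.clamp(image + delta, 0.0, 1.0))
        # Gradient ascent on the log-probability of the intended class.
        loss = -F.log_softmax(logits, dim=1)[:, label].mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # Keep the assistive signal small (L-infinity bound).
        with torch.no_grad():
            delta.clamp_(-epsilon, epsilon)
    return delta.detach()
```

In principle, the same confidence-maximization objective could be attached to a differentiable renderer to optimize the signal over lighting conditions and viewing angles in 3D, as the abstract describes, but that is beyond this sketch.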
Year
2021
DOI
10.1109/CVPRW53098.2021.00133
Venue
2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2021)
DocType
Conference
ISSN
2160-7508
Citations
0
PageRank
0.34
References
0
Authors
5
Name            | Order | Citations | PageRank
Camilo Pestana  | 1     | 0         | 1.69
Wei Liu         | 2     | 0         | 0.34
David G. Glance | 3     | 0         | 0.34
Robyn A. Owens  | 4     | 0         | 0.34
Ajmal Mian      | 5     | 58        | 7.53