Title
Adversarial Perturbations Against Real-Time Video Classification Systems
Abstract
Recent research has demonstrated the brittleness of machine learning systems to adversarial perturbations. However, these studies have mostly been limited to perturbations on images and, more generally, to classification tasks that do not deal with temporally varying inputs. In this paper we ask: are adversarial perturbations possible in real-time video classification systems, and if so, what properties must they satisfy? Such systems are used in surveillance, smart vehicles, and smart elderly care, so misclassification could be particularly harmful (e.g., a mishap at an elderly care facility may be missed). We show that accounting for temporal structure is key to generating adversarial examples in such systems. We exploit recent advances in generative adversarial network (GAN) architectures to account for temporal correlations and generate adversarial samples that can cause misclassification rates of over 80% for targeted activities. More importantly, the samples leave other activities largely unaffected, making them extremely stealthy. Finally, we also find, surprisingly, that in many scenarios the same perturbation can be applied to every frame of a video clip, which makes it relatively easy for the adversary to achieve misclassification.
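For context on the universal-perturbation claim in the abstract, the sketch below shows how a single frame-agnostic perturbation would be added to every frame of a clip before it reaches a video classifier. This is not the authors' GAN-based generator; it is a minimal illustration that assumes PyTorch, and the names VideoClassifier and apply_universal_perturbation, the 16-frame clip shape, and the epsilon budget are placeholders chosen for the example.

```python
# Minimal sketch (assumes PyTorch): add one frame-agnostic perturbation to every
# frame of a video clip, clamp the result to a valid pixel range, and classify it.
# VideoClassifier is a toy stand-in for an action-recognition model; in the paper
# the perturbation itself is produced by a GAN-based generator, not random noise.

import torch
import torch.nn as nn


class VideoClassifier(nn.Module):
    """Placeholder clip classifier: (batch, frames, C, H, W) -> class logits."""

    def __init__(self, num_classes: int = 101):
        super().__init__()
        self.fc = nn.Linear(3, num_classes)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # Average over frames and pixels, then classify (a toy stand-in for the
        # C3D / CNN+LSTM video models used in practice).
        feats = clip.mean(dim=(1, 3, 4))  # (batch, C)
        return self.fc(feats)


def apply_universal_perturbation(clip: torch.Tensor,
                                 delta: torch.Tensor,
                                 eps: float = 8 / 255) -> torch.Tensor:
    """Add the same perturbation delta of shape (C, H, W) to every frame of clip."""
    delta = delta.clamp(-eps, eps)                  # keep the perturbation small
    return (clip + delta.view(1, 1, *delta.shape)).clamp(0.0, 1.0)


if __name__ == "__main__":
    model = VideoClassifier()
    clip = torch.rand(1, 16, 3, 112, 112)           # one 16-frame clip in [0, 1]
    delta = 0.05 * torch.randn(3, 112, 112)         # frame-agnostic perturbation
    adv_clip = apply_universal_perturbation(clip, delta)
    print(model(adv_clip).argmax(dim=1))            # prediction on the perturbed clip
```

In the paper's setting, delta would instead come from a generative model trained so that the targeted activity is misclassified while other activities remain largely unaffected; random noise is used here only to keep the sketch self-contained.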
Year
2018
DOI
10.14722/ndss.2019.23202
Venue
arXiv: Learning
Field
Generative adversarial network, Ask price, Exploit, Artificial intelligence, Adversary, Machine learning, Mathematics, Perturbation (astronomy), Adversarial system
DocType
Journal
Volume
abs/1807.00458
ISSN
Network and Distributed Systems Security (NDSS) Symposium 2019, 24-27 February 2019, San Diego, CA, USA
Citations
6
PageRank
0.46
References
19
Authors
7
Name                        Order  Citations  PageRank
Shasha Li                   1      8          2.18
Ajaya Neupane               2      52         6.70
Sujoy Paul                  3      75         7.66
Chengyu Song                4      412        30.15
Srikanth V. Krishnamurthy   5      645        61.55
Amit K. Roy Chowdhury       6      1153       73.96
Swami, A.                   7      5105       566.62