Title
On the Robustness to Adversarial Examples of Neural ODE Image Classifiers
Abstract
The vulnerability of deep neural networks to adversarial attacks is currently one of the most challenging open problems in deep learning. The NeurIPS 2018 best-paper work proposed a new paradigm for defining deep neural networks with continuous internal activations. In this kind of network, dubbed Neural ODE Networks, a continuous hidden state is defined via parametric ordinary differential equations, and its dynamics can be adjusted to build representations for a given task, such as image classification. In this paper, we analyze the robustness of image classifiers implemented as ODE Nets to adversarial attacks and compare it to that of standard deep models. We show that Neural ODE Nets are natively more robust to adversarial attacks than state-of-the-art residual networks, and that some of their intrinsic properties, such as adaptive computation cost, open new directions for further increasing the robustness of deep-learned models. Moreover, thanks to the continuity of the hidden state, we are able to follow the perturbation injected by manipulated inputs and pinpoint the part of the internal dynamics that is most responsible for the misclassification.
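For illustration only (this is not the authors' code): the abstract describes an image classifier whose hidden state evolves according to a parametric ODE, evaluated against adversarial perturbations. The sketch below shows what such a setup could look like in PyTorch, assuming the torchdiffeq package for the ODE solver; the layer sizes, integration interval, and the one-step FGSM probe are illustrative assumptions, not the paper's experimental configuration.

```python
# Minimal sketch (not the authors' implementation) of a Neural ODE image
# classifier and a simple FGSM robustness probe. Assumes the `torchdiffeq`
# package; all hyperparameters below are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchdiffeq import odeint


class ODEFunc(nn.Module):
    """Parametric dynamics f(h(t), t; theta) of the continuous hidden state."""
    def __init__(self, channels=64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, t, h):
        return self.conv2(F.relu(self.conv1(F.relu(h))))


class ODENet(nn.Module):
    """Downsampling stem -> ODE block integrated from t=0 to t=1 -> classifier."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.stem = nn.Conv2d(3, 64, 3, stride=2, padding=1)
        self.odefunc = ODEFunc(64)
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):
        h0 = F.relu(self.stem(x))
        t = torch.tensor([0.0, 1.0], device=x.device)
        # odeint returns the state at every time in `t`; keep the final one.
        h1 = odeint(self.odefunc, h0, t, rtol=1e-3, atol=1e-3)[-1]
        return self.head(h1.mean(dim=(2, 3)))  # global average pooling


def fgsm(model, x, y, eps=8 / 255):
    """One-step FGSM perturbation, used here only as a basic robustness probe."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```

A robustness comparison in this spirit would measure accuracy on `fgsm(model, x, y)` batches for both an ODENet and a residual baseline at matched capacity.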
Year
2019
DOI
10.1109/WIFS47025.2019.9035109
Venue
2019 IEEE International Workshop on Information Forensics and Security (WIFS)
Keywords
adversarial examples, Neural ODE image classifiers, deep neural networks, adversarial attacks, deep learning field, continuous internal activations, continuous hidden state, parametric ordinary differential equations, image classification, ODE Nets, standard deep models, deep-learned models, residual networks, neural ODE networks
DocType
Conference
ISSN
2157-4766
ISBN
978-1-7281-3218-1
Citations
0
PageRank
0.34
References
7
Authors
4
Name | Order | Citations | PageRank
Fabio Carrara | 1 | 29 | 8.17
Roberto Caldelli | 2 | 481 | 37.01
Fabrizio Falchi | 3 | 459 | 55.65
Giuseppe Amato | 4 | 505 | 106.68