Title
Robust Machine Learning Via Privacy/Rate-Distortion Theory
Abstract
Robust machine learning formulations have emerged to address the prevalent vulnerability of deep neural networks to adversarial examples. Our work draws a connection between optimal robust learning and the privacy-utility tradeoff problem, which is a generalization of the rate-distortion problem. The saddle point of the game between a robust classifier and an adversarial perturbation can be found via the solution of a maximum conditional entropy problem. This information-theoretic perspective sheds light on the fundamental tradeoff between robustness and clean-data performance, which ultimately arises from the geometric structure of the underlying data distribution and the perturbation constraints.
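As a brief illustrative sketch of the saddle-point characterization mentioned in the abstract (the notation here is assumed for illustration, not taken from the paper): let (X, Y) be the clean input and label, let X' be the perturbed input produced by an adversarial channel P_{X'|X} restricted to a constraint set \mathcal{D}, and let Q_{Y|X'} be a soft classifier evaluated under log-loss. Then

\min_{Q_{Y|X'}} \max_{P_{X'|X} \in \mathcal{D}} \mathbb{E}\big[-\log Q_{Y|X'}(Y \mid X')\big]
  \;=\; \max_{P_{X'|X} \in \mathcal{D}} \min_{Q_{Y|X'}} \mathbb{E}\big[-\log Q_{Y|X'}(Y \mid X')\big]
  \;=\; \max_{P_{X'|X} \in \mathcal{D}} H(Y \mid X'),

where the first equality holds at a saddle point (i.e., when the minimax exchange is valid) and the second follows because, for a fixed perturbation channel, the expected log-loss is minimized by the true posterior Q_{Y|X'} = P_{Y|X'}, with optimal value equal to the conditional entropy H(Y | X'). In this reading, the worst-case adversary solves a maximum conditional entropy problem, and the robust classifier is the posterior induced by the entropy-maximizing channel.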
Year
2021
DOI
10.1109/ISIT45174.2021.9517751
Venue
2021 IEEE International Symposium on Information Theory (ISIT)
Keywords
robust learning, adversarial examples, privacy
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
5
Name                   Order   Citations   PageRank
Ye Wang                1       922         1.29
Shuchin Aeron          2       105         7.43
Adnan Siraj Rakin      3       30          7.89
Toshiaki Koike-Akino   4       6106        7.09
Pierre Moulin          5       0           0.34