Abstract |
---|
Robust machine learning formulations have emerged to address the prevalent vulnerability of deep neural networks to adversarial examples. Our work draws the connection between optimal robust learning and the privacy-utility tradeoff problem, which is a generalization of the rate-distortion problem. The saddle point of the game between a robust classifier and an adversarial perturbation can be found via the solution of a maximum conditional entropy problem. This information-theoretic perspective sheds light on the fundamental tradeoff between robustness and clean data performance, which ultimately arises from the geometric structure of the underlying data distribution and perturbation constraints. |
Year | DOI | Venue |
---|---|---|
2021 | 10.1109/ISIT45174.2021.9517751 | 2021 IEEE International Symposium on Information Theory (ISIT) |
Keywords | DocType | Citations |
---|---|---|
robust learning, adversarial examples, privacy | Conference | 0 |
PageRank | References | Authors |
---|---|---|
0.34 | 0 | 5 |
Name | Order | Citations | PageRank |
---|---|---|---|
Ye Wang | 1 | 92 | 21.29 |
Shuchin Aeron | 2 | 105 | 7.43
Adnan Siraj Rakin | 3 | 30 | 7.89 |
Toshiaki Koike-Akino | 4 | 610 | 67.09 |
Pierre Moulin | 5 | 0 | 0.34 |