Title: Adversarial Manipulation of Deep Representations
Abstract: We show that the representation of an image in a deep neural network (DNN) can be manipulated to mimic those of other natural images, with only minor, imperceptible perturbations to the original image. Previous methods for generating adversarial images focused on image perturbations designed to produce erroneous class labels, while we concentrate on the internal layers of DNN representations. In this way our new class of adversarial images differs qualitatively from others. While the adversary is perceptually similar to one image, its internal representation appears remarkably similar to that of a different image, one from a different class, bearing little if any apparent similarity to the input; the adversarial images appear generic and consistent with the space of natural images. This phenomenon raises questions about DNN representations, as well as the properties of natural images themselves.
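The core idea in the abstract, sometimes called a "feature adversary", is an optimization: perturb a source image so that its internal (hidden-layer) representation moves toward that of a target image, while a norm constraint keeps the perturbation imperceptible. Below is a minimal, self-contained sketch of that optimization using projected gradient descent on a toy one-layer ReLU "network"; the network weights, epsilon, learning rate, and step count are illustrative assumptions, not the paper's actual model or hyperparameters.

```python
# Toy "feature adversary" sketch: minimize || phi(x) - phi(target) ||^2
# subject to || x - source ||_inf <= eps, via projected gradient descent.
# phi is a stand-in internal representation: ReLU of a fixed linear map.
# All concrete numbers here are illustrative, not from the paper.

def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def relu(v):
    return [max(0.0, vi) for vi in v]

def phi(W, x):
    # "Internal representation" of input x under the toy network W.
    return relu(matvec(W, x))

def feature_adversary(W, source, target, eps=0.3, lr=0.05, steps=500):
    t_rep = phi(W, target)          # target's internal representation
    x = list(source)
    for _ in range(steps):
        z = matvec(W, x)
        r = relu(z)
        # Backprop through the squared feature distance:
        # d/dz_j = 2 * (r_j - t_j) * 1[z_j > 0]
        delta = [2.0 * (rj - tj) * (1.0 if zj > 0 else 0.0)
                 for rj, tj, zj in zip(r, t_rep, z)]
        grad = [sum(W[j][i] * delta[j] for j in range(len(W)))
                for i in range(len(x))]
        x = [xi - lr * gi for xi, gi in zip(x, grad)]
        # Project back into the L-infinity ball around the source,
        # keeping the perturbation "imperceptible".
        x = [min(si + eps, max(si - eps, xi)) for si, xi in zip(source, x)]
    return x
```

With a real DNN one would pick a specific hidden layer as phi, obtain gradients by automatic differentiation, and choose eps small relative to pixel range; the projection step plays the role of the perceptual-similarity constraint described in the abstract.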
Year: 2015
Venue: international conference on learning representations
Field: Pattern recognition, Computer science, Artificial intelligence, Phenomenon, Adversary, Artificial neural network, Machine learning, Adversarial system
DocType:
Volume: abs/1511.05122
Citations: 38
Journal:
PageRank: 2.20
References: 5
Authors: 4
Name            Order  Citations  PageRank
S. Sabour       1      92         6.55
Cao, Yanshuai   2      62         7.43
Fartash Faghri  3      61         3.88
David J. Fleet  4      5236       550.74