Title
Understanding the role of individual units in a deep neural network
Abstract
Deep neural networks excel at finding hierarchical representations that solve complex tasks over large datasets. How can we humans understand these learned representations? In this work, we present network dissection, an analytic framework to systematically identify the semantics of individual hidden units within image classification and image generation networks. First, we analyze a convolutional neural network (CNN) trained on scene classification and discover units that match a diverse set of object concepts. We find evidence that the network has learned many object classes that play crucial roles in classifying scene classes. Second, we use a similar analytic method to analyze a generative adversarial network (GAN) model trained to generate scenes. By analyzing changes made when small sets of units are activated or deactivated, we find that objects can be added and removed from the output scenes while adapting to the context. Finally, we apply our analytic framework to understanding adversarial attacks and to semantic image editing.
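The dissection step described above scores how well an individual unit's high-activation region matches a labeled concept region, typically via intersection-over-union (IoU). A minimal sketch of that comparison is below; the function name, the quantile-based threshold, and the toy arrays are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def unit_concept_iou(activations, concept_mask, quantile=0.995):
    """IoU between a unit's top-quantile activation region and a concept mask.

    activations:  (H, W) float array, one unit's feature map upsampled to
                  the resolution of the segmentation mask (assumed input).
    concept_mask: (H, W) boolean array marking pixels of one concept.
    """
    # Threshold the unit at a high quantile of its own activations,
    # so only its most strongly activated pixels count as "on".
    threshold = np.quantile(activations, quantile)
    unit_region = activations > threshold
    intersection = np.logical_and(unit_region, concept_mask).sum()
    union = np.logical_or(unit_region, concept_mask).sum()
    return intersection / union if union > 0 else 0.0

# Toy example: a unit that fires only on the top-left quadrant,
# scored against a mask covering that same quadrant.
act = np.zeros((8, 8))
act[:4, :4] = 1.0
mask = np.zeros((8, 8), dtype=bool)
mask[:4, :4] = True
print(unit_concept_iou(act, mask, quantile=0.75))  # → 1.0
```

In practice this score would be computed over an entire dataset and the unit labeled with the concept achieving the highest IoU, with a cutoff deciding whether the unit counts as an interpretable detector at all.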
Year
2020
DOI
10.1073/pnas.1907375117
Venue
PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA
Keywords
machine learning, deep networks, computer vision
DocType
Journal
Volume
117
Issue
5
ISSN
0027-8424
Citations
48
PageRank
0.45
References
38
Authors
6
Name	Order	Citations	PageRank
David Bau	1	149	9.18
Junyan Zhu	2	936	38.21
Hendrik Strobelt	3	387	21.65
Àgata Lapedriza	4	703	25.99
Bolei Zhou	5	1529	66.96
Antonio Torralba	6	14607	956.27