| Abstract |
| --- |
| The success of recent deep convolutional neural networks (CNNs) depends on learning hidden representations that can summarize the important factors of variation behind the data. In this work, we describe Network Dissection, a method that interprets networks by providing meaningful labels to their individual units. The proposed method quantifies the interpretability of CNN representations by evalua... |
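The abstract describes scoring the alignment between individual units and semantic concepts. As an illustrative sketch only (not the paper's implementation; the function names, the fixed threshold, and the dictionary-of-masks interface are assumptions), this is the general idea of labeling a unit by the concept whose segmentation mask best overlaps its thresholded activation map, measured by intersection-over-union:

```python
import numpy as np

def concept_iou(activation, concept_mask, threshold):
    """IoU between a unit's thresholded activation map and a binary concept mask."""
    active = activation > threshold                      # binarize the activation map
    inter = np.logical_and(active, concept_mask).sum()   # pixels active AND in concept
    union = np.logical_or(active, concept_mask).sum()    # pixels active OR in concept
    return inter / union if union > 0 else 0.0

def label_unit(activation, concepts, threshold):
    """Label a unit with the concept whose mask best matches its activation.

    `concepts` maps a concept name to a boolean mask of the same shape as
    `activation` (a hypothetical interface for this sketch).
    """
    scores = {name: concept_iou(activation, mask, threshold)
              for name, mask in concepts.items()}
    return max(scores, key=scores.get), scores
```

In the published method the activation maps are upsampled to the segmentation resolution and the threshold is chosen per unit from its activation distribution; the sketch above fixes both for brevity.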
| Year | DOI | Venue |
| --- | --- | --- |
| 2017 | 10.1109/TPAMI.2018.2858759 | IEEE Transactions on Pattern Analysis and Machine Intelligence |
| Keywords | Field | DocType |
| --- | --- | --- |
| Visualization, Detectors, Training, Image color analysis, Task analysis, Image segmentation, Semantics | Interpretability, Convolutional neural network, Computer science, Network architecture, Artificial intelligence, Black box, Machine learning, Deep neural networks | Journal |
| Volume | Issue | ISSN |
| --- | --- | --- |
| 41 | 9 | 0162-8828 |
| Citations | PageRank | References |
| --- | --- | --- |
| 19 | 0.77 | 25 |
| Authors |
| --- |
| 4 |
| Name | Order | Citations | PageRank |
| --- | --- | --- | --- |
| Bolei Zhou | 1 | 1529 | 66.96 |
| David Bau | 2 | 149 | 9.18 |
| Aude Oliva | 3 | 5121 | 298.19 |
| Antonio Torralba | 4 | 14607 | 956.27 |