Title
A Tour Of Convolutional Networks Guided By Linear Interpreters
Abstract
Convolutional networks are large linear systems divided into layers and connected by non-linear units. These units are the "articulations" that allow the network to adapt to the input. To understand how a network manages to solve a problem we must look at the articulated decisions in their entirety. If we could capture the actions of the non-linear units for a particular input, we would be able to replay the whole system back and forth as if it were always linear. This would also reveal the actions of the non-linearities, because the resulting linear system, a Linear Interpreter, depends on the input image. We introduce a hooking layer, called a LinearScope, which allows us to run the network and its linear interpreter in parallel. Its implementation is simple, flexible and efficient. From here we can make many curious inquiries: what do these linear systems look like? When the rows and columns of the transformation matrix are images, what do they look like? What type of basis do these linear transformations rely on? The answers depend on the problem at hand, and we take a tour through some popular architectures used for classification, super-resolution (SR) and image-to-image translation (I2I). For classification we observe that popular networks use a pixel-wise, per-class voting strategy and rely heavily on bias parameters. For SR and I2I we find that CNNs use wavelet-type bases, similar to the human visual system. For I2I we reveal copy-move and template-creation strategies used to generate outputs.
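A minimal sketch of the idea described in the abstract, assuming PyTorch and a ReLU network: if the on/off decisions of every ReLU are captured at a reference input and then replayed, the remaining computation is an affine map y = A x + b that depends on that input. This is not the authors' LinearScope implementation, which runs the network and its linear interpreter in parallel; the toy module, layer sizes and variable names below are illustrative assumptions.

    import torch
    import torch.nn as nn

    class LinearScope(nn.Module):
        # Wraps a ReLU: records its on/off mask, then replays it as a fixed,
        # input-dependent diagonal linear map once frozen.
        def __init__(self):
            super().__init__()
            self.mask = None
            self.frozen = False

        def forward(self, x):
            if self.frozen:
                return x * self.mask          # replay the captured decision linearly
            self.mask = (x > 0).to(x.dtype)   # capture the ReLU decisions
            return torch.relu(x)

    # Toy fully-connected network with hypothetical sizes, for illustration only.
    net = nn.Sequential(nn.Linear(8, 16), LinearScope(), nn.Linear(16, 4))
    scopes = [m for m in net.modules() if isinstance(m, LinearScope)]

    x0 = torch.randn(1, 8)
    with torch.no_grad():
        y0 = net(x0)                          # normal pass; masks are captured here

        for s in scopes:                      # freeze: the network is now affine in x
            s.frozen = True

        b = net(torch.zeros_like(x0))         # bias term of the affine map y = A x + b
        assert torch.allclose(net(x0), y0, atol=1e-5)   # same output via a linear system

        # Columns of A can be read off by probing with unit inputs; for a conv net
        # each column is itself an image.
        e = torch.zeros_like(x0)
        e[0, 3] = 1.0
        column_3 = net(e) - b                 # fourth column of A

Rows of A can be obtained analogously via backpropagation (vector-Jacobian products), since the Jacobian of the frozen network is exactly A.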
Year
2019
DOI
10.1109/ICCV.2019.00485
Venue
2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019)
Field
Computer vision, Computer science, Interpreter, Human–computer interaction, Artificial intelligence
DocType
Conference
Volume
2019
Issue
1
ISSN
1550-5499
Citations
0
PageRank
0.34
References
0
Authors
4
Name                        Order  Citations  PageRank
Pablo Navarrete Michelini   1      12         4.45
Hanwen Liu                  2      0          0.68
Yunhua Lu                   3      0          0.34
Xingqun Jiang               4      1          1.02