Title
Residual Connections Encourage Iterative Inference
Abstract
Residual networks (Resnets) have become a prominent architecture in deep learning. However, a comprehensive understanding of Resnets is still a topic of ongoing research. A recent view argues that Resnets perform iterative refinement of features. We attempt to further expose properties of this aspect. To this end, we study Resnets both analytically and empirically. We formalize the notion of iterative refinement in Resnets by showing that residual architectures naturally encourage features to move along the negative gradient of loss during the feedforward phase. In addition, our empirical analysis suggests that Resnets are able to perform both representation learning and iterative refinement. In general, a Resnet block tends to concentrate representation learning behavior in the first few layers, while higher layers perform iterative refinement of features. Finally, we observe that naively sharing residual layers leads to representation explosion and hurts generalization performance, and we show that simple existing strategies can help alleviate this problem.
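A minimal sketch (assuming PyTorch; not the authors' published code) of the abstract's central claim: for a residual block h_{i+1} = h_i + F(h_i), the residual branch F(h_i) tends to point along the negative loss gradient -dL/dh_i, so each block behaves like a small gradient-descent step on the features. All module and variable names below are illustrative assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as func

    torch.manual_seed(0)
    dim, batch, n_classes = 64, 32, 10

    class ResidualBlock(nn.Module):
        """One residual block: h -> h + F(h)."""
        def __init__(self, dim):
            super().__init__()
            self.f = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

        def forward(self, h):
            return h + self.f(h)  # h_{i+1} = h_i + F(h_i)

    block = ResidualBlock(dim)
    classifier = nn.Linear(dim, n_classes)

    h = torch.randn(batch, dim, requires_grad=True)   # features entering the block
    labels = torch.randint(0, n_classes, (batch,))
    loss = func.cross_entropy(classifier(block(h)), labels)

    # Gradient of the loss with respect to the block's input features h_i.
    (grad_h,) = torch.autograd.grad(loss, h)

    # Cosine similarity between the residual F(h_i) and -dL/dh_i. At random
    # initialization this is near zero; the paper's empirical analysis reports
    # that it becomes positive in the upper blocks of trained Resnets,
    # consistent with the iterative-refinement view.
    with torch.no_grad():
        residual = block.f(h)
        cos = func.cosine_similarity(residual, -grad_h, dim=1).mean()
    print(f"cos(F(h), -dL/dh) = {cos.item():.3f}")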
Year
2017
Venue
International Conference on Learning Representations
Field
Iterative refinement, Residual, Computer science, Inference, Residual blocks, Artificial intelligence, Deep learning, Overfitting, Residual neural network, Machine learning, Feature learning
DocType
Volume
abs/1710.04773
Citations
10
Journal
PageRank
0.61
References
11
Authors
6
Name                    Order  Citations  PageRank
Jastrzębski Stanisław   1      131        14.12
Devansh Arpit           2      146        14.24
Nicolas Ballas          3      418        28.91
Vikas Verma             4      42         5.09
Tong Che                5      80         6.13
Yoshua Bengio           6      42677      3039.83