Title
BN-invariant Sharpness Regularizes the Training Model to Better Generalization.
Abstract
It is widely believed that flatter minima generalize better. However, it has been pointed out that the usual definitions of sharpness, which consider either the maximum or the integral of the loss over a $\delta$-ball of parameters around a minimum, cannot give consistent measurements for scale-invariant neural networks, e.g., networks with batch normalization layers. In this paper, we first propose a measure of sharpness, BN-Sharpness, which gives consistent values for networks that are equivalent under BN. It achieves scale invariance by tying the integration diameter to the scale of the parameters. We then present a computationally efficient way to calculate BN-Sharpness approximately, i.e., via a one-dimensional integral along the "sharpest" direction. Furthermore, we use BN-Sharpness to regularize training and design an algorithm to minimize the new regularized objective. Our algorithm achieves considerably better performance than vanilla SGD over various experimental settings.
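To make the construction concrete, the following display is an illustrative sketch (not the paper's exact formula) of a sharpness measure whose integration radius is tied to the parameter scale; here $L$ is the training loss, $\theta$ the parameters, $\delta$ a relative radius, and $v$ ranges over unit directions, with the maximum picking out the "sharpest" one:

$$S_{\delta}(\theta) \;=\; \max_{\|v\|\le 1}\; \frac{1}{2\,\delta\,\|\theta\|}\int_{-\delta\|\theta\|}^{\delta\|\theta\|} \bigl(L(\theta + t\,v) - L(\theta)\bigr)\,\mathrm{d}t.$$

Because the radius $\delta\|\theta\|$ scales with $\|\theta\|$, substituting $t = \alpha s$ gives $S_{\delta}(\alpha\theta) = S_{\delta}(\theta)$ whenever the loss satisfies $L(\alpha w) = L(w)$, which is exactly the rescaling invariance that BN induces on the weights preceding a BN layer; the paper's actual BN-Sharpness may differ in its choice of norm and normalization constants.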
Year
2019
DOI
10.24963/ijcai.2019/578
Venue
IJCAI
Field
Discrete mathematics, Algebra, Computer science, Invariant (mathematics)
DocType
Conference
ISSN
Published in IJCAI 2019
Citations
0
PageRank
0.34
References
0
Authors
5
Name             Order   Citations   PageRank
Mingyang Yi      1       0           1.01
Huishuai Zhang   2       34          12.56
Wei Chen         3       166         14.55
Zhi-Ming Ma      4       227         18.26
Tie-yan Liu      5       4662        256.32