Title
Drop-Activation: Implicit Parameter Reduction and Harmonious Regularization
Abstract
Overfitting frequently occurs in deep learning. In this paper, we propose a novel regularization method called drop-activation to reduce overfitting and improve generalization. The key idea is to drop nonlinear activation functions by randomly setting them to identity functions during training. During testing, we use a deterministic network with a new activation function that encodes the average effect of randomly dropping activations. Our theoretical analyses support the regularization effect of drop-activation as implicit parameter reduction and verify its compatibility with batch normalization (Ioffe and Szegedy in Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, 2015). Experimental results on CIFAR10, CIFAR100, SVHN, EMNIST, and ImageNet show that drop-activation generally improves the performance of popular neural network architectures on the image classification task. Furthermore, as a regularizer, drop-activation can be used in harmony with standard training and regularization techniques such as batch normalization and AutoAugment (Cubuk et al. in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 113-123, 2019). The code is available at https://github.com/LeungSamWai/Drop-Activation.
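The mechanism described in the abstract can be captured in a short sketch. The PyTorch-style module below is an illustrative reconstruction, not the authors' reference implementation (see the linked repository for that); the module name DropActivation and the retain probability p = 0.95 are assumptions made for this example. During training, each ReLU is kept element-wise with probability p and replaced by the identity otherwise; at test time the layer applies the deterministic average p * relu(x) + (1 - p) * x, which is a leaky ReLU with negative slope 1 - p.

    import torch
    import torch.nn as nn

    class DropActivation(nn.Module):
        # Hypothetical sketch of drop-activation; p is the probability of
        # keeping the ReLU nonlinearity at each unit during training.
        def __init__(self, p: float = 0.95):
            super().__init__()
            self.p = p

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            if self.training:
                # Element-wise Bernoulli mask: 1 -> apply ReLU, 0 -> identity.
                mask = torch.bernoulli(torch.full_like(x, self.p))
                return mask * torch.relu(x) + (1.0 - mask) * x
            # Test time: deterministic average over the random masks,
            # equivalent to a leaky ReLU with negative slope 1 - p.
            return self.p * torch.relu(x) + (1.0 - self.p) * x

In use, such a module would simply replace each nn.ReLU in a network, so that the random training behavior and the deterministic test behavior are switched automatically by model.train() and model.eval().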
Year
2018
DOI
10.1007/s42967-020-00085-3
Venue
Communications on Applied Mathematics and Computation
Keywords
Deep learning, Image classification, Overfitting, Regularization
DocType
Journal
Volume
3
Issue
2
ISSN
2096-6385
Citations
0
PageRank
0.34
References
0
Authors
3
Name            Order  Citations  PageRank
Senwei Liang    1      0          2.37
Yuehaw Khoo     2      32         6.04
Haizhao Yang    3      46         13.03