Title: Dropout Training as Adaptive Regularization
Abstract: Dropout and other feature noising schemes control overfitting by artificially corrupting the training data. For generalized linear models, dropout performs a form of adaptive regularization. Using this viewpoint, we show that the dropout regularizer is first-order equivalent to an L2 regularizer applied after scaling the features by an estimate of the inverse diagonal Fisher information matrix. We also establish a connection to AdaGrad, an online learning algorithm, and find that a close relative of AdaGrad operates by repeatedly solving linear dropout-regularized problems. By casting dropout as regularization, we develop a natural semi-supervised algorithm that uses unlabeled data to create a better adaptive regularizer. We apply this idea to document classification tasks, and show that it consistently boosts the performance of dropout training, improving on state-of-the-art results on the IMDB reviews dataset.
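A minimal sketch of the first-order equivalence stated in the abstract, under assumed notation that does not appear in the record itself (a GLM with log-partition function $A(\cdot)$, features $x_i$ with noised versions $\tilde{x}_i$, weights $\beta$, and dropout rate $\delta$): dropout training minimizes the expected loss under feature noise, which equals the clean loss plus a penalty

\[
R(\beta) \;=\; \sum_i \mathbb{E}\big[A(\tilde{x}_i \cdot \beta)\big] - A(x_i \cdot \beta)
\;\approx\; \frac{1}{2} \sum_i A''(x_i \cdot \beta)\, \operatorname{Var}\big[\tilde{x}_i \cdot \beta\big]
\;=\; \frac{\delta}{2(1-\delta)} \sum_j \beta_j^2 \sum_i A''(x_i \cdot \beta)\, x_{ij}^2,
\]

where the middle step is a second-order Taylor expansion around the clean features (the first-order term vanishes because dropout noise is mean-preserving). The inner sum over $i$ is the $j$-th diagonal entry of an empirical Fisher information estimate $\hat{\mathcal{I}}$, so the penalty is approximately $\frac{\delta}{2(1-\delta)}\,\beta^\top \operatorname{diag}(\hat{\mathcal{I}})\,\beta$: an ordinary L2 penalty after the features are scaled by $\operatorname{diag}(\hat{\mathcal{I}})^{-1/2}$, which is the equivalence the abstract claims.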
Year: 2013
Venue: NIPS
DocType: Journal
Volume: abs/1307.1493
Citations: 114
PageRank: 9.49
References: 18
Authors: 3
Name           Order   Citations   PageRank
Stefan Wager   1       156         16.00
Sida Wang      2       541         44.65
Percy Liang    3       3416        172.27