Title
Improving neural networks by preventing co-adaptation of feature detectors
Abstract
When a large feedforward neural network is trained on a small training set, it typically performs poorly on held-out test data. This "overfitting" is greatly reduced by randomly omitting half of the feature detectors on each training case. This prevents complex co-adaptations in which a feature detector is only helpful in the context of several other specific feature detectors. Instead, each neuron learns to detect a feature that is generally helpful for producing the correct answer given the combinatorially large variety of internal contexts in which it must operate. Random "dropout" gives big improvements on many benchmark tasks and sets new records for speech and object recognition.
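The dropout procedure the abstract describes is simple enough to show in a few lines. Below is a minimal NumPy sketch, not the authors' implementation: each hidden unit is dropped independently with probability 0.5 during training, and at test time all units are kept while activations are scaled by the keep probability (the paper equivalently halves the outgoing weights) to approximate the average over the exponentially many thinned networks. The function name and layer sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(h, p_drop=0.5, train=True):
    # Training: zero each unit independently with probability p_drop,
    # so no unit can rely on specific other units being present.
    if train:
        mask = (rng.random(h.shape) >= p_drop).astype(h.dtype)
        return h * mask
    # Test: keep every unit but scale by the keep probability,
    # matching the expected activation seen during training.
    return h * (1.0 - p_drop)

# Illustrative hidden-layer activations (batch of 2, 4 units each).
h = np.array([[0.3, 1.2, 0.0, 2.5],
              [1.1, 0.4, 0.9, 0.2]])
print(dropout(h, train=True))   # roughly half the entries zeroed
print(dropout(h, train=False))  # all entries halved
```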
Year
2012
Venue
CoRR
Field
Training set, Feedforward neural network, Feature detection, Computer science, Feature (computer vision), Artificial intelligence, Test data, Overfitting, Artificial neural network, Machine learning, Cognitive neuroscience of visual object recognition
DocType
Journal
Volume
abs/1207.0580
Citations
1298
PageRank
152.45
References
9
Authors
5
Name                  Order  Citations  PageRank
Geoffrey E. Hinton    1      404354     751.69
Nitish Srivastava     2      56453      18.34
Alex Krizhevsky       3      131755     88.91
Ilya Sutskever        4      258141     120.24
Ruslan Salakhutdinov  5      121907     64.15