Title
Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
Abstract
Rectified activation units (rectifiers) are essential for state-of-the-art neural networks. In this work, we study rectifier neural networks for image classification from two aspects. First, we propose a Parametric Rectified Linear Unit (PReLU) that generalizes the traditional rectified unit. PReLU improves model fitting with nearly zero extra computational cost and little overfitting risk. Second, we derive a robust initialization method that particularly considers the rectifier nonlinearities. This method enables us to train extremely deep rectified models directly from scratch and to investigate deeper or wider network architectures. Based on the learnable activation and advanced initialization, we achieve 4.94% top-5 test error on the ImageNet 2012 classification dataset. This is a 26% relative improvement over the ILSVRC 2014 winner (GoogLeNet, 6.66% [33]). To our knowledge, our result is the first to surpass the reported human-level performance (5.1%, [26]) on this dataset.
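The abstract's two contributions reduce to short formulas: PReLU replaces the fixed negative slope of a rectifier with a learnable coefficient, and the initialization draws weights from a zero-mean Gaussian whose variance accounts for the rectifier. Below is a minimal NumPy sketch of both; the formulas (including the 0.25 initial slope and the sqrt(2 / ((1 + a^2) * fan_in)) standard deviation) follow the paper's description, but the helper names (prelu, he_init) and the example shapes are illustrative, not the authors' implementation.

import numpy as np

def prelu(y, a=0.25):
    """PReLU forward pass: f(y) = y if y > 0 else a * y.
    `a` is a learnable slope (one per channel in the paper); 0.25 is the paper's initial value."""
    return np.where(y > 0, y, a * y)

def he_init(fan_in, shape, a=0.0, rng=None):
    """Zero-mean Gaussian init with std = sqrt(2 / ((1 + a^2) * fan_in)).
    With a = 0 this is the plain-ReLU case, std = sqrt(2 / fan_in); a > 0 covers PReLU.
    For a conv layer, fan_in = kernel_height * kernel_width * in_channels."""
    rng = rng or np.random.default_rng()
    std = np.sqrt(2.0 / ((1.0 + a ** 2) * fan_in))
    return rng.normal(0.0, std, size=shape)

# Example (hypothetical shapes): weights for a 3x3 conv with 64 input channels, 128 filters.
w = he_init(fan_in=3 * 3 * 64, shape=(128, 64, 3, 3))
x = np.random.randn(4, 8)
print(prelu(x).shape)  # (4, 8)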
Year
2015
DOI
10.1109/ICCV.2015.123
Venue
ICCV
Field
Rectifier, Parametric rectified linear unit, Pattern recognition, Computer science, Network architecture, Artificial intelligence, Overfitting, Initialization, Artificial neural network, Contextual image classification, Machine learning, Vanishing gradient problem
DocType
Journal
Volume
abs/1502.01852
Issue
1
ISSN
1550-5499
Citations
1685
PageRank
83.25
References
27
Authors
4
Name           Order  Citations  PageRank
Kaiming He     1      214696     96.72
Xiangyu Zhang  2      130444     37.66
Shaoqing Ren   3      170515     48.00
Jian Sun       4      258429     56.90