Abstract |
---|
Recently, self-normalizing neural networks have been proposed with a scaled version of exponential linear units (SELUs), which forces neuron activations to converge automatically towards zero mean and unit variance without the use of batch normalization. As the negative part of the SELU is an exponential function, it is computationally intensive. In this paper, we introduce self-normalizing piecewise linear units (SPeLUs) as a fast approximation of SELUs, replacing the exponential part with piecewise linear functions. Various possible shapes of piecewise linear units with stable self-normalizing properties are discussed. Experiments show that SPeLUs provide an efficient and fast alternative to SELUs, with nearly identical classification performance on the MNIST, CIFAR-10 and CIFAR-100 datasets. With SPeLUs, we also show that batch normalization can simply be omitted when constructing deep neural networks, which is advantageous for their fast implementation. |
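The idea sketched in the abstract can be illustrated in a few lines. Below, `selu` uses the standard SELU constants (λ ≈ 1.0507, α ≈ 1.6733 from Klambauer et al., 2017), and `spelu_sketch` is a hypothetical piecewise-linear approximation of the exponential negative part; the knot positions are illustrative assumptions, not the SPeLU shapes derived in the paper.

```python
import numpy as np

# Standard SELU constants (Klambauer et al., 2017).
LAMBDA = 1.0507009873554805
ALPHA = 1.6732632423543772

def selu(x):
    """SELU: lambda * x for x > 0, lambda * alpha * (exp(x) - 1) otherwise."""
    return np.where(x > 0.0, LAMBDA * x, LAMBDA * ALPHA * np.expm1(np.minimum(x, 0.0)))

def spelu_sketch(x):
    """Illustrative piecewise linear approximation of SELU (NOT the paper's exact SPeLU).

    The negative branch connects the true SELU values at a few knot points with
    straight segments; outside the leftmost knot the value is clamped, mimicking
    the saturation of SELU towards -lambda * alpha.
    """
    knots = np.array([-3.0, -1.5, -0.5, 0.0])  # assumed knot placement
    # np.interp clamps to the endpoint values outside [knots[0], knots[-1]].
    return np.where(x > 0.0, LAMBDA * x, np.interp(x, knots, selu(knots)))
```

The piecewise branch costs only comparisons and multiply-adds per activation, whereas the SELU branch evaluates an exponential, which is the computational saving the paper targets.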
Year | DOI | Venue |
---|---|---|
2018 | 10.1109/ICPR.2018.8546221 | 2018 24th International Conference on Pattern Recognition (ICPR) |
Keywords | Field | DocType
---|---|---|
Piecewise linear units, activation functions, self-normalizing property | Zero mean, Random variable, Normalization (statistics), MNIST database, Exponential function, Pattern recognition, Computer science, Algorithm, Artificial intelligence, Artificial neural network, Piecewise linear function, Deep neural networks | Conference
ISSN | ISBN | Citations
---|---|---|
1051-4651 | 978-1-5386-3789-0 | 0
PageRank | References | Authors
---|---|---|
0.34 | 2 | 3
Name | Order | Citations | PageRank |
---|---|---|---|
Yuanyuan Chang | 1 | 0 | 0.34 |
Xiaofu Wu | 2 | 4 | 2.78 |
Suofei Zhang | 3 | 34 | 7.26 |