Title
Large batch size training of neural networks with adversarial training and second-order information.
Abstract
The most straightforward method to accelerate Stochastic Gradient Descent (SGD) is to distribute the randomly selected batch of inputs over multiple processors. Keeping the distributed processors fully utilized requires commensurately growing the batch size; however, large batch training usually leads to poor generalization. Existing solutions for large batch training either significantly degrade accuracy or require massive hyper-parameter tuning. To address this issue, we propose a novel large batch training method which combines recent results in adversarial training and second-order information. We extensively evaluate our method on the Cifar-10/100, SVHN, TinyImageNet, and ImageNet datasets, using multiple neural networks, including residual networks as well as smaller networks such as SqueezeNext. Our new approach exceeds the performance of existing solutions in terms of both accuracy and the number of SGD iterations (up to 1% and $5\times$, respectively). We emphasize that this is achieved without any additional hyper-parameter tuning to tailor our method to any of these experiments. With slight hyper-parameter tuning, our method can reduce the number of SGD iterations of ResNet18 on Cifar-10/ImageNet by $44.8\times$ and $28.8\times$, respectively. We have open-sourced our method, including tools for computing the Hessian spectrum.
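The abstract mentions open-sourced tools for computing the Hessian spectrum. As a minimal illustration of the general technique, and not the authors' released implementation, the following PyTorch sketch estimates the top Hessian eigenvalue of a mini-batch loss via power iteration on Hessian-vector products; the function name hessian_top_eigenvalue and its arguments are hypothetical.

```python
import torch

def hessian_top_eigenvalue(loss, params, iters=20, tol=1e-4):
    """Estimate the largest Hessian eigenvalue of `loss` w.r.t. `params`
    using power iteration with Hessian-vector products (illustrative sketch)."""
    params = [p for p in params if p.requires_grad]
    # First-order gradients with the graph retained so we can differentiate again.
    grads = torch.autograd.grad(loss, params, create_graph=True)

    # Random unit start vector with the same shapes as the parameters.
    v = [torch.randn_like(p) for p in params]
    norm = torch.sqrt(sum((x * x).sum() for x in v))
    v = [x / norm for x in v]

    eigenvalue = None
    for _ in range(iters):
        # Hessian-vector product: d/dparams of (grads . v).
        gv = sum((g * x).sum() for g, x in zip(grads, v))
        hv = torch.autograd.grad(gv, params, retain_graph=True)
        # Rayleigh quotient of the unit vector v gives the eigenvalue estimate.
        new_eig = sum((h * x).sum() for h, x in zip(hv, v)).item()
        # Normalize Hv to obtain the next power-iteration vector.
        norm = torch.sqrt(sum((h * h).sum() for h in hv))
        v = [h / (norm + 1e-12) for h in hv]
        if eigenvalue is not None and abs(new_eig - eigenvalue) < tol * (abs(eigenvalue) + 1e-12):
            break
        eigenvalue = new_eig
    return eigenvalue
```

A call such as hessian_top_eigenvalue(criterion(model(x), y), model.parameters()) would return an estimate of the largest loss curvature on that batch, which is the kind of second-order signal the abstract refers to.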
Year: 2018
Venue: arXiv: Learning
Field: Residual, Stochastic gradient descent, Mathematical optimization, Hessian matrix, Artificial neural network, Mathematics
DocType:
Volume: abs/1810.01021
Citations: 1
Journal:
PageRank: 0.38
References: 32
Authors: 4
Name                Order  Citations  PageRank
Zhewei Yao          1      31         10.58
Amir Gholami        2      66         12.99
Kurt Keutzer        3      5040       801.67
Michael W. Mahoney  4      3297       218.10