Title
Large Batch Optimization for Deep Learning: Training BERT in 76 minutes
Abstract
Training large deep neural networks on massive datasets is computationally very challenging. There has been a recent surge of interest in using large-batch stochastic optimization methods to tackle this issue. The most prominent algorithm in this line of research is LARS, which, by employing layerwise adaptive learning rates, trains ResNet on ImageNet in a few minutes. However, LARS performs poorly for attention models like BERT, indicating that its performance gains are not consistent across tasks. In this paper, we first study a principled layerwise adaptation strategy to accelerate training of deep neural networks using large mini-batches. Using this strategy, we develop a new layerwise adaptive large batch optimization technique called LAMB; we then provide convergence analysis of LAMB as well as LARS, showing convergence to a stationary point in general nonconvex settings. Our empirical results demonstrate the superior performance of LAMB across various tasks such as BERT and ResNet-50 training with very little hyperparameter tuning. In particular, for BERT training, our optimizer enables use of very large batch sizes of 32868 without any degradation of performance. By increasing the batch size to the memory limit of a TPUv3 Pod, BERT training time can be reduced from 3 days to just 76 minutes (Table 1).
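The layerwise adaptation the abstract refers to can be illustrated with a short sketch. Below is a minimal, illustrative Python/NumPy single-layer update in the spirit of LAMB: an Adam-style direction rescaled by a per-layer trust ratio ||w|| / ||update||. The function name and hyperparameter defaults are assumptions for illustration, not the paper's exact algorithm or values.

```python
import numpy as np

def lamb_style_update(w, g, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999,
                      eps=1e-6, weight_decay=0.01):
    """One illustrative layerwise-adaptive step for a single layer's weights.

    Sketch only: an Adam-style direction is rescaled by the per-layer trust
    ratio ||w|| / ||update||. Defaults are illustrative assumptions.
    """
    # Adam-style first and second moment estimates with bias correction.
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g * g
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)

    # Base update direction, with decoupled weight decay.
    update = m_hat / (np.sqrt(v_hat) + eps) + weight_decay * w

    # Layerwise trust ratio: scale the step by ||w|| / ||update||.
    w_norm = np.linalg.norm(w)
    u_norm = np.linalg.norm(update)
    trust_ratio = w_norm / u_norm if w_norm > 0 and u_norm > 0 else 1.0

    w = w - lr * trust_ratio * update
    return w, m, v
```

Because the trust ratio is computed per layer, layers whose weights are large relative to their proposed update take proportionally larger steps, which is what allows stable training at very large batch sizes.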
Year
2020
Venue
ICLR
DocType
Conference
Citations
1
PageRank
0.35
References
24
Authors
10
Name                   Order  Citations  PageRank
Yang You               1      225        14.01
Jing Li                2      1          0.35
Sashank Jakkam Reddi   3      324        22.57
Jonathan Hseu          4      21         1.81
Sanjiv Kumar           5      2182       153.05
Srinadh Bhojanapalli   6      295        15.21
Xiaodan Song           7      733        54.42
James Demmel           8      11         0.98
Kurt Keutzer           9      18         4.86
Cho-Jui Hsieh          10     5034       291.05