Title
Sketch-Based Empirical Natural Gradient Methods for Deep Learning
Abstract
In this paper, we develop an efficient sketch-based empirical natural gradient method (SENG) for large-scale deep learning problems. The empirical Fisher information matrix is typically low-rank, since at each iteration sampling is practical only on a small batch of data. Although the corresponding natural gradient direction lies in a small subspace, both the computational cost and the memory requirement remain intractable due to the high dimensionality. We design randomized techniques for different neural network structures to resolve these challenges. For layers of moderate dimension, sketching can be performed on a regularized least squares subproblem. Otherwise, since the gradient is a vectorization of the product of two matrices, we apply sketching to low-rank approximations of these matrices to compute the most expensive parts. A distributed version of SENG is also developed for extremely large-scale applications. Global convergence to stationary points is established under mild assumptions, and fast linear convergence is analyzed in the neural tangent kernel (NTK) regime. Extensive experiments on convolutional neural networks show the competitiveness of SENG compared with state-of-the-art methods. On ResNet50 with ImageNet-1k, SENG achieves 75.9% Top-1 testing accuracy within 41 epochs. Experiments on distributed large-batch training of ResNet50 with ImageNet-1k show reasonable scaling efficiency.
Year
2022
DOI
10.1007/s10915-022-01911-x
Venue
Journal of Scientific Computing
Keywords
Deep learning, Natural gradient methods, Sketch-based methods, Convergence, 90C06, 90C26
DocType
Journal
Volume
92
Issue
3
ISSN
0885-7474
Citations
0
PageRank
0.34
References
3
Authors
5
Name            Order  Citations  PageRank
Minghan Yang    1      0          0.68
Dong Xu         2      7616       291.96
Zaiwen Wen      3      934        40.20
Mengyun Chen    4      0          0.34
Pengxiang Xu    5      0          0.34