Title
CADA: Communication-Adaptive Distributed Adam
Abstract
Stochastic gradient descent (SGD) has taken the stage as the primary workhorse for large-scale machine learning. It is often used with its adaptive variants such as AdaGrad, Adam, and AMSGrad. This paper proposes an adaptive stochastic gradient descent method for distributed machine learning, which can be viewed as the communication-adaptive counterpart of the celebrated Adam method, justifying its name CADA. The key components of CADA are a set of new rules tailored to adaptive stochastic gradients that can be implemented to reduce upload (worker-to-server) communication. The new algorithms adaptively reuse stale Adam gradients, thus saving communication, while retaining convergence rates comparable to those of the original Adam. In numerical experiments, CADA achieves impressive empirical performance, substantially reducing the total number of communication rounds.
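The gradient-reuse idea in the abstract can be illustrated with a short sketch. The Python snippet below is a minimal, illustrative implementation of a CADA-style rule on a toy distributed least-squares problem: each worker uploads a fresh stochastic gradient only when it differs enough from the stale copy the server already holds (or when that copy grows too old), and the server runs a standard Adam update on the resulting aggregate. The threshold constant c, the staleness cap d_max, and the exact skip condition are assumptions for illustration, not the paper's precise rules.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy distributed least-squares problem: worker m holds its own (A[m], b[m]).
M, dim, n_per, batch = 4, 10, 32, 8
A = [rng.standard_normal((n_per, dim)) for _ in range(M)]
b = [rng.standard_normal(n_per) for _ in range(M)]

def local_grad(m, theta):
    """Mini-batch stochastic gradient of worker m's local loss."""
    idx = rng.choice(n_per, size=batch, replace=False)
    Am, bm = A[m][idx], b[m][idx]
    return Am.T @ (Am @ theta - bm) / batch

# Server-side Adam state with the usual defaults.
theta = np.zeros(dim)
m_t, v_t = np.zeros(dim), np.zeros(dim)
beta1, beta2, eps, lr = 0.9, 0.999, 1e-8, 0.01

# Stale gradient copies the server keeps per worker, and their ages.
stale = [local_grad(m, theta) for m in range(M)]
age = [0] * M
c, d_max = 50.0, 10     # skip threshold and maximum staleness (assumed values)
recent_steps = []       # squared lengths of recent server updates

total_uploads = 0
for t in range(1, 201):
    thresh = c * np.mean(recent_steps) if recent_steps else 0.0
    for m in range(M):
        g = local_grad(m, theta)
        # Upload only if the fresh gradient drifted enough from the stale
        # copy, or if the copy is too old; otherwise reuse and skip upload.
        if np.sum((g - stale[m]) ** 2) >= thresh or age[m] >= d_max:
            stale[m], age[m] = g, 0
            total_uploads += 1
        else:
            age[m] += 1
    g_agg = np.mean(stale, axis=0)

    # Standard Adam step on the server with the (partially stale) aggregate.
    m_t = beta1 * m_t + (1 - beta1) * g_agg
    v_t = beta2 * v_t + (1 - beta2) * g_agg ** 2
    m_hat = m_t / (1 - beta1 ** t)
    v_hat = v_t / (1 - beta2 ** t)
    step = lr * m_hat / (np.sqrt(v_hat) + eps)
    theta -= step
    recent_steps = (recent_steps + [np.sum(step ** 2)])[-d_max:]

print(f"uploads used: {total_uploads} of {200 * M} possible")
```

Tying the skip threshold to the squared lengths of recent server updates mirrors the intuition that gradients change little when the iterates change little, so stale copies remain informative and the upload can be safely skipped.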
Year
2021
Venue
24TH INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS (AISTATS)
DocType
Conference
Volume
130
ISSN
2640-3498
Citations
0
PageRank
0.34
References
0
Authors
4
Name          Order  Citations  PageRank
Tianyi Chen   1      43         7.52
Ziye Guo      2      0          0.34
Yuejiao Sun   3      2          2.09
Wotao Yin     4      13         1.36