Title: A Distributed Second-Order Algorithm You Can Trust.
Abstract: Due to the rapid growth of data and computational resources, distributed optimization has become an active research area in recent years. While first-order methods seem to dominate the field, second-order methods are nevertheless attractive, as they potentially require fewer communication rounds to converge. However, significant drawbacks impede their wide adoption, such as the cost of computing and communicating a large Hessian matrix. In this paper we present a new algorithm for distributed training of generalized linear models that requires only the diagonal blocks of the Hessian matrix to be computed on the individual workers. To deal with this approximate curvature information, we propose an adaptive approach that, akin to trust-region methods, dynamically adapts the auxiliary model to compensate for modeling errors. We provide theoretical rates of convergence for a wide class of problems, including L1-regularized objectives. We also demonstrate that our approach achieves state-of-the-art results on multiple large benchmark datasets.
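The abstract compresses the core mechanism into two sentences; a short sketch can make it concrete. Below is a minimal single-process NumPy simulation of the general idea for an L2-regularized least-squares objective: each "worker" owns a block of coordinates and computes only its diagonal block of the Hessian, and a damping parameter is adapted trust-region-style from the ratio of actual to predicted decrease. The objective, the parameter names (sigma, rho), and the acceptance thresholds are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def sketch(A, b, lam=1.0, n_workers=4, iters=20, sigma=1.0):
    """Block-diagonal-Hessian steps with trust-region-style damping.

    Minimizes f(w) = 0.5*||A w - b||^2 + 0.5*lam*||w||^2.
    Hypothetical illustration of the idea in the abstract, not the
    paper's algorithm.
    """
    n, d = A.shape
    w = np.zeros(d)
    # Partition the coordinates across the (simulated) workers.
    blocks = np.array_split(np.arange(d), n_workers)

    def f(w):
        r = A @ w - b
        return 0.5 * r @ r + 0.5 * lam * w @ w

    for _ in range(iters):
        g = A.T @ (A @ w - b) + lam * w  # full gradient
        step = np.zeros(d)
        for blk in blocks:  # would run in parallel, one block per worker
            Ab = A[:, blk]
            # Only the diagonal block of the Hessian is ever formed.
            H_blk = Ab.T @ Ab + lam * np.eye(len(blk))
            # Damped Newton step on this block; sigma inflates curvature.
            step[blk] = -np.linalg.solve(sigma * H_blk, g[blk])

        # Decrease predicted by the block-diagonal quadratic model.
        pred = -(g @ step + 0.5 * sigma * sum(
            step[blk] @ (A[:, blk].T @ (A[:, blk] @ step[blk])
                         + lam * step[blk])
            for blk in blocks))
        actual = f(w) - f(w + step)
        rho = actual / max(pred, 1e-12)

        if rho > 0.75:    # model trustworthy: accept step, relax damping
            w += step
            sigma = max(0.5 * sigma, 1.0)
        elif rho > 0.1:   # acceptable: accept step, keep damping
            w += step
        else:             # model too optimistic: reject step, tighten damping
            sigma *= 2.0
    return w

# Example usage with synthetic data:
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 40))
b = rng.standard_normal(200)
w = sketch(A, b)
```

The rho-based accept/reject rule mirrors a standard trust-region update: when the block-diagonal model predicts the true decrease well, its influence is trusted more; when it overestimates, the damping parameter compensates for the off-diagonal curvature the workers never see.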
Year: 2018
Venue: ICML
DocType: Journal
Volume: abs/1806.07569
Citations: 1
PageRank: 0.35
References: 11
Authors: 6
Name              | Order | Citations | PageRank
Celestine Dünner  | 1     | 12        | 6.99
Aurelien Lucchi   | 2     | 2419      | 89.45
Matilde Gargiani  | 3     | 1         | 1.03
Yatao Bian        | 4     | 14        | 7.67
Thomas Hofmann    | 5     | 10064     | 1001.83
Martin Jaggi      | 6     | 852       | 54.16