Title: A novel bound on the convergence rate of ADMM for distributed optimization
Abstract
This paper deals with distributed consensus optimization problems, in which a connected network of agents aims to minimize the sum of local cost functions over a common variable. In particular, we focus on the ADMM-based algorithm proposed by Shi et al. (2014), which, under the assumption that the local cost functions are strongly convex with Lipschitz continuous gradients, was shown to converge linearly to the optimal solution. Shi et al. (2014) derived an upper bound on the convergence rate, depending on the network topology and on a few properties of the local cost functions, and used it to propose an optimal design of the penalty parameter of the algorithm. In this paper, additionally assuming that the cost functions are twice differentiable around the optimal solution, we refine that result by deriving a novel upper bound based on the additional knowledge of the curvatures of the cost functions around the optimal solution. We conjecture this bound to be tight in most cases and, to corroborate its effectiveness, we perform a numerical analysis over important families of graphs showing that: (i) the bound we introduce significantly improves the one derived in Shi et al. (2014) and, as a consequence, (ii) the design of the penalty parameter proposed in Shi et al. (2014) may lead to poor performance.
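To make the setting concrete, here is a minimal sketch of consensus ADMM on scalar quadratic local costs f_i(x) = 0.5*a_i*(x - b_i)^2, whose curvatures a_i play the role of the second-derivative information exploited by the refined bound. This is the standard global-consensus form of ADMM (with a shared averaging step), not necessarily the exact decentralized variant analyzed by Shi et al. (2014); the function name, the penalty parameter rho, and the test data are illustrative assumptions.

```python
# Hedged sketch: global-consensus ADMM for minimizing sum_i 0.5*a_i*(x - b_i)^2.
# Not the authors' exact decentralized algorithm; rho is the ADMM penalty parameter
# whose tuning the paper's convergence-rate bound informs.
import numpy as np

def consensus_admm(a, b, rho=1.0, iters=300):
    """a: curvatures (strong-convexity moduli) of the local costs,
    b: local minimizers; returns the consensus estimate of the optimum."""
    n = len(a)
    x = np.zeros(n)   # local copies of the decision variable
    z = 0.0           # consensus (averaged) variable
    u = np.zeros(n)   # scaled dual variables
    for _ in range(iters):
        # local x-update: closed form for quadratic costs
        x = (a * b + rho * (z - u)) / (a + rho)
        # consensus update: average of local estimates plus duals
        z = np.mean(x + u)
        # dual ascent on the consensus constraint x_i = z
        u = u + x - z
    return z

a = np.array([1.0, 2.0, 4.0])       # curvatures around the optimum
b = np.array([0.0, 1.0, 2.0])
x_star = np.sum(a * b) / np.sum(a)  # centralized optimum: 10/7
print(consensus_admm(a, b))         # converges to approx 10/7 = 1.42857...
```

For strongly convex quadratics, the iteration converges linearly, and how fast it does so depends on rho and on the curvatures a_i, which is precisely the dependence the paper's refined bound captures.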
Year: 2022
DOI: 10.1016/j.automatica.2022.110403
Venue: Automatica
Keywords: Multi-agent systems, Static optimization problems, Numerical algorithms, ADMM, Convergence rate
DocType: Journal
Volume: 142
ISSN: 0005-1098
Citations: 0
PageRank: 0.34
References: 0
Authors: 3
Name                Order  Citations  PageRank
Nicola Bastianello  1      1          3.40
L. Schenato         2      8397       2.18
Ruggero Carli       3      8946       9.17