Title: On the Q-linear convergence of Distributed Generalized ADMM under non-strongly convex function components
Abstract: Solving optimization problems in multi-agent networks in which each agent has only partial knowledge of the problem has become increasingly important. In this paper we consider the problem of minimizing the sum of $n$ convex functions, where each function is known by exactly one agent. We show that Generalized Distributed ADMM converges Q-linearly to the solution of this optimization problem when the overall objective function is strongly convex, while the functions known by the individual agents are allowed to be merely convex. Establishing Q-linear convergence allows for tracking statements that cannot be made if only R-linear convergence is guaranteed; in scenarios in which the objective functions vary on the same time scale as the algorithm updates, R-linear convergence is typically insufficient. Further, we establish the equivalence between Generalized Distributed ADMM and P-EXTRA for a subset of mixing matrices. This equivalence yields insight into the convergence of P-EXTRA when overshooting is used to accelerate convergence.
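For reference, a minimal sketch of the problem setting and of the convergence notions named in the abstract; the notation ($f_i$, $x^\star$, $\rho$, $C$) is illustrative and not taken from the paper itself, and the definitions below are the standard ones rather than the paper's specific statements.

% Consensus optimization over n agents: each agent i privately holds f_i,
% and all agents cooperate to solve
\[
  \min_{x \in \mathbb{R}^d} \; f(x) \;=\; \sum_{i=1}^{n} f_i(x).
\]
% Q-linear convergence to the minimizer x*: the error contracts at every iteration,
\[
  \|x^{k+1} - x^\star\| \;\le\; \rho \,\|x^{k} - x^\star\| \quad \text{for some } \rho \in (0,1) \text{ and all } k,
\]
% whereas R-linear convergence only bounds the error by a geometric sequence,
\[
  \|x^{k} - x^\star\| \;\le\; C \rho^{k},
\]
% which permits non-monotone behavior and hence gives weaker tracking guarantees.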
Year: 2019
DOI: 10.1109/TSIPN.2019.2892055
Venue: IEEE Transactions on Signal and Information Processing over Networks
Keywords: Convergence, Optimization, Convex functions, Information processing, Linear programming, Europe, Knowledge engineering
Field: Convergence (routing), Mathematical optimization, Matrix (mathematics), Regular polygon, Equivalence (measure theory), Convex function, Rate of convergence, Optimization problem, Mathematics
DocType: Journal
Volume: 5
Issue: 3
ISSN: 2373-776X
Citations: 4
PageRank: 0.37
References: 13
Authors: 2
Name: Marie Maros, Order: 1, Citations: 11, PageRank: 1.82
Name: Joakim Jalden, Order: 2, Citations: 243, PageRank: 21.59