Abstract
---
We consider a multi-agent setting with agents exchanging information over a possibly time-varying network, aiming at minimising a separable objective function subject to constraints. To achieve this objective we propose a novel subgradient averaging algorithm that allows for non-differentiable objective functions and different constraint sets per agent. Allowing different constraints per agent simultaneously with a time-varying communication network constitutes a distinctive feature of our approach, extending existing results on distributed subgradient methods. To highlight the necessity of dealing with different constraint sets within a distributed optimisation context, we analyse a problem instance where an existing algorithm does not exhibit convergent behaviour if adapted to account for different constraint sets. For our proposed iterative scheme we show asymptotic convergence of the iterates to a minimum of the underlying optimisation problem for step sizes of the form η/(k+1), η>0. We also analyse this scheme under a step size choice of η/√(k+1), η>0, and establish a convergence rate of O(ln k/√k) in objective value. To demonstrate the efficacy of the proposed method, we investigate a robust regression problem and an ℓ2 regression problem with regularisation.
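The abstract describes a distributed scheme in which each agent averages its neighbours' iterates, takes a subgradient step with diminishing step size η/(k+1), and projects onto its own constraint set. The following is a minimal sketch of that generic pattern, not the paper's exact algorithm: the weight matrix `W`, the toy objective (a sum of absolute deviations, whose minimiser is the median), and the box projections are illustrative assumptions.

```python
import numpy as np

def subgradient_averaging(subgrads, projections, W, x0, eta=1.0, iters=2000):
    """Sketch of distributed projected subgradient averaging.

    At iteration k, each agent i:
      1) averages neighbours' iterates using the mixing matrix W,
      2) takes a subgradient step with step size eta/(k+1),
      3) projects onto its own constraint set (projections[i]).
    All names and the problem instance below are illustrative.
    """
    x = np.array(x0, dtype=float)
    for k in range(iters):
        x = W @ x                       # consensus averaging over the network
        step = eta / (k + 1)            # diminishing step size eta/(k+1)
        for i in range(len(x)):
            x[i] = projections[i](x[i] - step * subgrads[i](x[i]))
    return x

# Toy instance: minimise sum_i |x - a_i| (non-differentiable), minimiser = median(a).
a = [0.0, 1.0, 2.0]
subgrads = [lambda x, ai=ai: np.sign(x - ai) for ai in a]           # a subgradient of |x - a_i|
projections = [lambda x: np.clip(x, -5.0, 5.0) for _ in a]          # box constraint per agent
W = np.full((3, 3), 1 / 3)              # doubly stochastic weights (complete graph)
x = subgradient_averaging(subgrads, projections, W, [4.0, -3.0, 0.5])
```

With a doubly stochastic `W` and the diminishing step size, all agents' iterates approach the median of the data (here 1.0), illustrating consensus and optimality simultaneously.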
Year | DOI | Venue
---|---|---
2021 | 10.1016/j.automatica.2021.109738 | Automatica

Keywords | DocType | Volume
---|---|---
Distributed optimisation, Multi-agent networks, Parallel algorithms, Subgradient methods, Consensus | Journal | 131

Issue | ISSN | Citations
---|---|---
1 | 0005-1098 | 0

PageRank | References | Authors
---|---|---
0.34 | 0 | 4
Name | Order | Citations | PageRank |
---|---|---|---
Romao Licio | 1 | 0 | 0.34 |
Kostas Margellos | 2 | 168 | 24.95 |
Giuseppe Notarstefano | 3 | 470 | 42.83 |
Antonis Papachristodoulou | 4 | 990 | 90.01 |