Title
Stochastic mirror descent method for distributed multi-agent optimization.
Abstract
This paper considers a distributed optimization problem over a time-varying multi-agent network, in which each agent has access only to its own convex objective function and the agents cooperatively minimize the sum of these functions over the network. Based on the mirror descent method, we develop a distributed algorithm that uses subgradient information corrupted by stochastic errors. We first analyze the effect of the stochastic errors on the convergence of the algorithm and then provide an explicit bound on the convergence rate as a function of the error bound and the number of iterations. Our results show that, when the subgradient evaluations are subject to stochastic errors, the algorithm asymptotically converges to the optimal value of the problem to within an error level determined by those errors. The proposed algorithm can be viewed as a generalization of distributed subgradient projection methods, since it uses a general Bregman divergence in place of the Euclidean squared distance. Finally, simulation results on a regularized hinge regression problem illustrate the effectiveness of the algorithm.
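To make the algorithmic idea in the abstract concrete, below is a minimal Python sketch of one possible distributed stochastic mirror descent scheme: at each iteration every agent mixes its neighbors' estimates through a doubly stochastic weight matrix of the time-varying network and then takes a mirror descent step with a noise-corrupted local subgradient, here using the negative-entropy mirror map (KL Bregman divergence) on the probability simplex. The function names, the diminishing step-size rule, the Gaussian noise model, and the choice of mirror map are illustrative assumptions and are not taken from the paper.

import numpy as np

def entropy_mirror_step(x, noisy_subgrad, step):
    # Entropic mirror descent (exponentiated gradient) update on the simplex:
    # x_new is proportional to x * exp(-step * g), which is the Bregman
    # projection step when the divergence is the KL divergence.
    w = x * np.exp(-step * noisy_subgrad)
    return w / w.sum()

def distributed_smd(subgrads, x0, weights, num_iters, step0, noise_std=0.1, seed=0):
    # subgrads : list of callables; subgrads[i](x) returns a subgradient of
    #            agent i's local convex objective at x (noise is added below
    #            to model stochastic subgradient errors).
    # x0       : common initial point on the probability simplex.
    # weights  : callable t -> doubly stochastic mixing matrix W(t) of the
    #            time-varying network at iteration t.
    rng = np.random.default_rng(seed)
    x0 = np.asarray(x0, dtype=float)
    m, d = len(subgrads), x0.size
    X = np.tile(x0, (m, 1))                  # row i holds agent i's estimate
    for t in range(num_iters):
        W = weights(t)
        X = W @ X                            # consensus (mixing) step
        step = step0 / np.sqrt(t + 1)        # diminishing step size (assumed)
        for i in range(m):
            g = subgrads[i](X[i]) + noise_std * rng.standard_normal(d)
            X[i] = entropy_mirror_step(X[i], g, step)
    return X.mean(axis=0)                    # average of the agents' estimates

As a quick illustrative test, weights(t) could simply return a fixed doubly stochastic matrix such as np.full((m, m), 1.0 / m), and each subgrads[i] could be the gradient of a local quadratic; the paper itself treats more general time-varying networks and step-size conditions.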
Year
2018
DOI
10.1007/s11590-016-1071-z
Venue
Optimization Letters
Keywords
Distributed algorithm, Multi-agent network, Mirror descent, Stochastic approximation, Convex optimization
Field
Mathematical optimization, Stochastic gradient descent, Subgradient method, Distributed algorithm, Bregman divergence, Rate of convergence, Convex optimization, Optimization problem, Stochastic approximation, Mathematics
DocType
Journal
Volume
12
Issue
6
ISSN
1862-4480
Citations
6
PageRank
0.44
References
23
Authors
4
Name, Order, Citations, PageRank
Jueyou Li, 1, 26, 1.49
Guoquan Li, 2, 6, 0.44
Zhi-You Wu, 3, 18, 4.35
Changzhi Wu, 4, 195, 19.07