Title
Gradient-free method for nonsmooth distributed optimization
Abstract
In this paper, we consider a distributed nonsmooth optimization problem over a multi-agent computational network. We first extend Nesterov's (centralized) random gradient-free algorithm and Gaussian smoothing technique to the distributed setting. Then, we prove the convergence of the algorithm and give an explicit convergence rate in terms of the network size and topology. Our proposed method is gradient-free, which may be preferred by practitioners. Since only cost function values are required, our method may in theory suffer a factor of up to $d$ (the dimension of each agent's decision variable) in the convergence rate compared with distributed subgradient-based methods. However, our numerical simulations show that for some nonsmooth problems, our method can even outperform subgradient-based methods, which may be attributed to the slow convergence of subgradient steps on such problems.
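The method described in the abstract combines two ingredients: Nesterov's random gradient-free oracle, which estimates a descent direction from two function evaluations along a Gaussian direction, and a consensus averaging step over the network. Below is a minimal Python sketch under stated assumptions: the weight matrix W is taken to be doubly stochastic, the step size follows a simple diminishing schedule, and the names (gaussian_smoothing_oracle, distributed_gradient_free, fs, W) are illustrative choices, not the paper's exact algorithm.

```python
import numpy as np

def gaussian_smoothing_oracle(f, x, mu, rng):
    # Nesterov-style gradient-free oracle:
    #   g_mu(x) = ((f(x + mu*u) - f(x)) / mu) * u,  u ~ N(0, I).
    # Only function values are used; no (sub)gradients required.
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x)) / mu * u

def distributed_gradient_free(fs, W, x0, steps=2000, mu=1e-3, seed=0):
    # Sketch of a distributed scheme: each agent i averages its
    # neighbors' iterates with weights W[i, :] (consensus step),
    # then takes a gradient-free step on its private cost fs[i].
    rng = np.random.default_rng(seed)
    n = len(fs)
    x = np.tile(x0, (n, 1)).astype(float)  # one iterate per agent
    for k in range(steps):
        alpha = 1.0 / np.sqrt(k + 1)       # diminishing step size
        x = W @ x                          # consensus averaging
        for i in range(n):
            g = gaussian_smoothing_oracle(fs[i], x[i], mu, rng)
            x[i] = x[i] - alpha * g        # local descent step
    return x.mean(axis=0)                  # network-average iterate

# Toy usage: 3 agents on a fully mixed network jointly minimize
# sum_i ||x - c_i||_1, a nonsmooth consensus problem.
if __name__ == "__main__":
    centers = [np.array([1.0, 0.0]),
               np.array([0.0, 1.0]),
               np.array([-1.0, 0.0])]
    fs = [lambda x, c=c: np.linalg.norm(x - c, 1) for c in centers]
    W = np.array([[0.50, 0.25, 0.25],
                  [0.25, 0.50, 0.25],
                  [0.25, 0.25, 0.50]])
    print(distributed_gradient_free(fs, W, np.zeros(2)))
```

The oracle uses two function evaluations per sampled direction; the factor of up to $d$ in the theoretical rate mentioned in the abstract reflects the variance introduced by this random-direction estimate.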
Year
2015
DOI
10.1007/s10898-014-0174-2
Venue
Journal of Global Optimization
Keywords
Distributed algorithm, Gaussian smoothing, Gradient-free method, Convex optimization
Field
Convergence (routing), Network size, Mathematical optimization, Subgradient method, Gaussian blur, Distributed algorithm, Rate of convergence, Convex optimization, Optimization problem, Mathematics
DocType
Journal
Volume
61
Issue
2
ISSN
0925-5001
Citations
12
PageRank
0.59
References
17
Authors
4
Name         Order  Citations  PageRank
Jueyou Li    1      26         1.49
Changzhi Wu  2      195        19.07
Zhiyou Wu    3      35         5.35
Qiang Long   4      36         2.40