Title
Distributed Learning over Networks under Subspace Constraints
Abstract
This work presents and studies a distributed algorithm for solving optimization problems over networks, where agents have individual costs to minimize subject to subspace constraints that require the minimizers across the network to lie in a low-dimensional subspace. The algorithm consists of two steps: i) a self-learning step, where each agent minimizes its own cost using a stochastic gradient update; and ii) a social-learning step, where each agent combines the updated estimates from its neighbors using the entries of a combination matrix that converges in the limit to the projection onto the low-dimensional subspace. We obtain analytical formulas that reveal how the step-size, the statistical properties of the data, the gradient noise, and the subspace constraints influence the network mean-square-error (MSE) performance. The results also show that, in the small step-size regime, the iterates generated by the distributed algorithm achieve the centralized steady-state MSE performance. We provide simulations to illustrate the theoretical findings.
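The abstract outlines a two-step (adapt-then-combine) iteration: a local stochastic-gradient update followed by a combination step whose matrix powers converge to the projector onto the constraint subspace. The following is a minimal Python sketch of that structure on a toy quadratic (LMS-type) problem; the dimensions, data model, step-size, and the particular combination matrix A = P_U + eps*(I - P_U) are illustrative assumptions, not the design proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, P = 10, 5, 3        # agents, per-agent estimate size, subspace dim (assumed)
mu = 0.01                 # small step-size

# Orthonormal basis U of the network-wide constraint subspace, and its projector.
U, _ = np.linalg.qr(rng.standard_normal((N * M, P)))
P_U = U @ U.T

# One illustrative combination matrix: eigenvalue 1 on the subspace, eps off it,
# so its powers converge to P_U. (A real design would also respect the network
# topology, i.e., be block-sparse according to the neighborhoods.)
eps = 0.5
A = P_U + eps * (np.eye(N * M) - P_U)

w_star = U @ rng.standard_normal(P)   # true model, lying in the subspace
w = np.zeros(N * M)                   # stacked network iterate

for _ in range(5000):
    psi = np.empty(N * M)
    for k in range(N):                          # i) self-learning step
        sl = slice(k * M, (k + 1) * M)
        u_k = rng.standard_normal(M)            # regressor for agent k
        d_k = u_k @ w_star[sl] + 0.1 * rng.standard_normal()
        grad_hat = -(d_k - u_k @ w[sl]) * u_k   # instantaneous gradient estimate
        psi[sl] = w[sl] - mu * grad_hat
    w = A @ psi                                 # ii) social-learning step

print("steady-state MSD:", np.mean((w - w_star) ** 2))
```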
Year
2019
DOI
10.1109/IEEECONF44664.2019.9049074
Venue
2019 53rd Asilomar Conference on Signals, Systems, and Computers
Keywords
low-dimensional subspace, subspace constraints, network mean-square-error performance, step-size regime, distributed algorithm, optimization problems, minimizers, stochastic gradient update, distributed learning, self-learning, social-learning, data statistical properties, gradient noise, steady-state MSE performance
DocType
Conference
ISSN
1058-6393
ISBN
978-1-7281-4301-9
Citations
0
PageRank
0.34
References
22
Authors
3
Name            Order  Citations  PageRank
Roula Nassif    1      57         6.89
Stefan Vlaski   2      23         11.39
Ali H. Sayed    3      9134       667.71