Title
Cooperative Online Learning: Keeping your Neighbors Updated.
Abstract
We study an asynchronous online learning setting with a network of agents. At each time step, some of the agents are activated, requested to make a prediction, and pay the corresponding loss. The loss function is then revealed to these agents and also to their neighbors in the network. When activations are stochastic, we show that the regret achieved by $N$ agents running the standard online Mirror Descent is $O(\sqrt{\alpha T})$, where $T$ is the horizon and $\alpha \le N$ is the independence number of the network. This is in contrast to the regret $\Omega(\sqrt{N T})$ which $N$ agents incur in the same setting when feedback is not shared. We also show a matching lower bound of order $\sqrt{\alpha T}$ that holds for any given network. When the pattern of agent activations is arbitrary, the problem changes significantly: we prove an $\Omega(T)$ lower bound on the regret that holds for any online algorithm oblivious to the feedback source.
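The setting described above can be illustrated with a minimal Python sketch. This is not the paper's protocol verbatim: the ring network, the activation probability, the squared loss, and the step size below are all illustrative assumptions, and online gradient descent (Mirror Descent with the Euclidean regularizer) stands in for the general algorithm.

```python
import math
import random

# Hypothetical instance of the setting: N agents on a ring network make
# predictions in [0, 1]. Each round, a random subset of agents is activated;
# each active agent predicts and pays a squared loss. The loss function is
# then revealed to the active agent AND its neighbors, and every informed
# agent takes a gradient-descent step (Euclidean Mirror Descent).

random.seed(0)

N, T = 5, 200
# Ring network: agent i is connected to agents i-1 and i+1 (mod N).
neighbors = {i: {(i - 1) % N, (i + 1) % N} for i in range(N)}

eta = 1.0 / math.sqrt(T)        # step size (illustrative tuning)
x = [0.5] * N                   # each agent's current prediction
total_loss = 0.0

for t in range(T):
    target = random.random()    # adversary's target for this round
    # Stochastic activations: each agent is activated independently.
    active = [i for i in range(N) if random.random() < 0.5]
    for i in active:
        total_loss += (x[i] - target) ** 2   # loss paid on activation
    # Feedback sharing: the loss reaches each active agent and its neighbors.
    informed = set()
    for i in active:
        informed |= {i} | neighbors[i]
    for j in informed:
        grad = 2.0 * (x[j] - target)         # gradient of the revealed loss
        x[j] = min(1.0, max(0.0, x[j] - eta * grad))

print(f"cumulative loss over {T} rounds: {total_loss:.3f}")
```

Because neighbors update even on rounds where they are not activated, each agent effectively learns from many more loss functions than it is charged for, which is the mechanism behind the improvement from $\sqrt{NT}$ to $\sqrt{\alpha T}$.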
Year: 2019
Venue: arXiv: Learning
DocType: Journal
Volume: abs/1901.08082
Citations: 0
PageRank: 0.34
References: 11
Authors: 3
Name                 Order  Citations  PageRank
Nicolò Cesa-Bianchi  1      46095      90.83
Tommaso R. Cesari    2      0          0.68
Claire Monteleoni    3      3272       4.15