Abstract
---
Considering a class of gradient-based multi-agent learning algorithms in non-cooperative settings, we provide convergence guarantees to a neighborhood of a stable Nash equilibrium. In particular, we consider continuous games where agents learn in 1) deterministic settings with oracle access to their gradient and 2) stochastic settings with an unbiased estimator of their gradient. We also study the effects of non-uniform learning rates, which cause a distortion of the vector field that can alter which equilibrium the agents converge to and the path they take. We support the analysis with numerical examples that provide insight into how one might synthesize games to achieve desired equilibria.
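The gradient-based learning described in the abstract can be illustrated with a minimal sketch of simultaneous gradient play in a hypothetical two-player quadratic game (the game, the cost functions `f1`, `f2`, and the function names below are illustrative assumptions, not taken from the paper). Each agent descends its own cost with its own learning rate, which lets one observe how non-uniform rates change the trajectory taken to the equilibrium:

```python
# Hypothetical two-player game, for illustration only:
#   player 1 minimizes f1(x1, x2) = 0.5*x1**2 + x1*x2
#   player 2 minimizes f2(x1, x2) = 0.5*x2**2 - x1*x2
# The origin (0, 0) is a stable Nash equilibrium of this game.

def grad_f1(x1, x2):
    # Gradient of f1 with respect to player 1's own variable x1.
    return x1 + x2

def grad_f2(x1, x2):
    # Gradient of f2 with respect to player 2's own variable x2.
    return x2 - x1

def gradient_play(x1, x2, lr1, lr2, steps=500):
    """Simultaneous gradient play with (possibly non-uniform) learning rates.

    Both players update at once, each using only its own gradient oracle,
    matching the deterministic setting sketched in the abstract.
    """
    for _ in range(steps):
        g1, g2 = grad_f1(x1, x2), grad_f2(x1, x2)
        x1, x2 = x1 - lr1 * g1, x2 - lr2 * g2
    return x1, x2

# Uniform vs. non-uniform learning rates: both converge to the origin here,
# but trace different paths through the joint strategy space.
print(gradient_play(1.0, -1.0, lr1=0.1, lr2=0.1))
print(gradient_play(1.0, -1.0, lr1=0.1, lr2=0.02))
```

Replacing the exact gradients with unbiased noisy estimates would give the stochastic setting; convergence then holds to a neighborhood of the equilibrium rather than the point itself.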
Year | Venue | Field
---|---|---
2019 | UAI | Convergence (routing), Mathematical optimization, Computer science

DocType | Citations | PageRank
---|---|---
Conference | 0 | 0.34
References | Authors
---|---
0 | 4
Name | Order | Citations | PageRank
---|---|---|---
Benjamin Chasnov | 1 | 0 | 2.03 |
Lillian J. Ratliff | 2 | 87 | 23.32 |
Eric Mazumdar | 3 | 13 | 7.50 |
Samuel Burden | 4 | 90 | 11.04 |