Title
Learning in Games with Lossy Feedback.
Abstract
We consider a game-theoretic multi-agent learning problem in which feedback information can be lost during the learning process and rewards are given by a broad class of games known as variationally stable games. We propose a simple variant of the classical online gradient descent algorithm, called reweighted online gradient descent (ROGD), and show that in variationally stable games, if each agent adopts ROGD, then almost sure convergence to the set of Nash equilibria is guaranteed, even when the feedback loss is asynchronous and arbitrarily correlated among agents. We then extend the framework to handle unknown feedback loss probabilities by using an estimator (constructed from past data) in their place. Finally, we further extend the framework to accommodate both asynchronous feedback loss and stochastic rewards, and establish that multi-agent ROGD learning still converges to the set of Nash equilibria in these settings. Together, these results contribute to the broader landscape of multi-agent online learning by significantly relaxing the feedback information required to achieve desirable outcomes.
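The abstract does not spell out the ROGD update rule, so the following is only a minimal sketch of one natural reading: when an agent's gradient feedback arrives, the observed gradient is reweighted by the inverse of the (possibly estimated) feedback-arrival probability before a projected gradient step; when feedback is lost, the action is left unchanged. The function name rogd_step, the parameters p_hat, eta, and project, and the toy quadratic example are all illustrative assumptions, not the authors' specification.

```python
import numpy as np

def rogd_step(x, grad, received, p_hat, eta, project):
    """One hypothetical reweighted-OGD update for a single agent.

    x        : current action (numpy array)
    grad     : gradient feedback for this round (None if lost)
    received : whether feedback actually arrived this round
    p_hat    : (estimated) probability that feedback is not lost
    eta      : step size
    project  : projection onto the agent's feasible action set
    """
    if received:
        # Reweight the observed gradient by 1 / p_hat so the update is
        # unbiased with respect to the feedback-loss process (assumed).
        x = project(x - eta * grad / p_hat)
    # If feedback is lost, the action is left unchanged this round.
    return x

# Toy usage: descent on ||x||^2 over the unit ball, with feedback
# lost independently with probability 0.3 each round.
rng = np.random.default_rng(0)
project = lambda z: z / max(1.0, np.linalg.norm(z))
x, p, eta = np.array([0.8, -0.5]), 0.7, 0.05
for t in range(1000):
    received = rng.random() < p
    grad = 2.0 * x if received else None  # gradient of ||x||^2
    x = rogd_step(x, grad, received, p, eta, project)
print(x)  # approaches the optimum at the origin
```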
Year
2018
Venue
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018)
Keywords
online learning, learning process, simple variant, Nash equilibria, almost sure convergence
Field
Online learning, Asynchronous communication, Convergence of random variables, Mathematical optimization, Gradient descent, Lossy compression, Simple variant, Computer science, Nash equilibrium, Estimator
DocType
Conference
Volume
31
ISSN
1049-5258
Citations
0
PageRank
0.34
References
0
Authors
6
Name                      Order  Citations  PageRank
Zhengyuan Zhou            1      141        19.63
Panayotis Mertikopoulos   2      258        43.71
Susan Athey               3      23         4.67
Nicholas Bambos           4      888        145.45
Peter W. Glynn            5      1527       293.76
Yinyu Ye                  6      5201       497.09