Title
Distributed convergence to Nash equilibria in two-network zero-sum games
Abstract
This paper considers a class of strategic scenarios in which two networks of agents have opposing objectives with regard to the optimization of a common objective function. In the resulting zero-sum game, individual agents collaborate with neighbors in their respective network and have only partial knowledge of the state of the agents in the other network. For the case when the interaction topology of each network is undirected, we synthesize a distributed saddle-point strategy and establish its convergence to the Nash equilibrium for the class of strictly concave-convex and locally Lipschitz objective functions. We also show that this dynamics does not converge in general if the topologies are directed. This justifies the introduction, in the directed case, of a generalization of this distributed dynamics which we show converges to the Nash equilibrium for the class of strictly concave-convex differentiable functions with globally Lipschitz gradients. The technical approach combines tools from algebraic graph theory, nonsmooth analysis, set-valued dynamical systems, and game theory.
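The abstract describes a consensus-based saddle-point dynamics in which the agents of each network cooperate with their in-network neighbors while holding only partial information about the other network. Below is a minimal sketch of that idea for the undirected case; the quadratic objective U(x, y) = -x^2 + xy + y^2, the 4-agent ring topologies, and the one-to-one observation pattern between the two networks are assumptions made for illustration, not the paper's exact algorithm or setup.

```python
import numpy as np

# Minimal illustrative sketch of a consensus + gradient ascent/descent iteration,
# in the spirit of the paper's undirected-topology dynamics but NOT its exact
# algorithm. U(x, y) = -x^2 + x*y + y^2 is strictly concave in x and strictly
# convex in y, with its unique saddle point (Nash equilibrium) at the origin.

def laplacian(adjacency):
    """Graph Laplacian L = D - A of an undirected network."""
    return np.diag(adjacency.sum(axis=1)) - adjacency

# Undirected 4-agent ring used for both networks (assumed example topology).
ring = np.array([[0, 1, 0, 1],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [1, 0, 1, 0]], dtype=float)
L1, L2 = laplacian(ring), laplacian(ring)

grad_x = lambda x, y: -2.0 * x + y   # dU/dx: network 1 ascends in x
grad_y = lambda x, y: x + 2.0 * y    # dU/dy: network 2 descends in y

rng = np.random.default_rng(0)
x = rng.normal(size=4)   # agent i of network 1 keeps its own estimate x[i]
y = rng.normal(size=4)   # agent i of network 2 keeps its own estimate y[i]

step = 0.02
for _ in range(5000):
    # Each agent averages with its in-network neighbors (consensus term -L @ z)
    # and takes a gradient step using only the estimate of the single agent it
    # observes in the other network (partial cross-network information).
    x_next = x + step * (-L1 @ x + grad_x(x, y))
    y_next = y + step * (-L2 @ y - grad_y(x, y))
    x, y = x_next, y_next

print(np.round(x, 4), np.round(y, 4))  # all estimates approach the saddle point (0, 0)
```

In this sketch the Laplacian terms drive agreement within each network, while the gradient terms push the network-wide averages toward the saddle point; for directed topologies the paper shows such dynamics need not converge and introduces a suitable generalization.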
Year
2013
DOI
10.1016/j.automatica.2013.02.062
Venue
Automatica
Keywords
Adversarial networks, Distributed algorithms, Zero-sum game, Saddle-point dynamics, Nash equilibria
Field
Correlated equilibrium, Mathematical optimization, Mathematical economics, Epsilon-equilibrium, Control theory, Best response, Equilibrium selection, Solution concept, Normal-form game, Nash equilibrium, Folk theorem, Mathematics
DocType
Journal
Volume
49
Issue
6
ISSN
0005-1098
Citations
31
PageRank
0.98
References
14
Authors
2
Name            Order  Citations  PageRank
B. Gharesifard  1      31         0.98
Jorge Cortes    2      14521      28.75