Title
Sample-based Approximation of Nash in Large Many-Player Games via Gradient Descent
Abstract
Nash equilibrium is a central concept in game theory. Several Nash solvers exist, yet none scale to normal-form games with many actions and many players, especially those with payoff tensors too big to be stored in memory. In this work, we propose an approach that iteratively improves an approximation to a Nash equilibrium through joint play. It accomplishes this by tracing a previously established homotopy which connects instances of the game defined with decaying levels of entropy regularization. To encourage iterates to remain near this path, we efficiently minimize \emph{average deviation incentive} via stochastic gradient descent, intelligently sampling entries in the payoff tensor as needed. This process can also be viewed as constructing and reacting to a polymatrix approximation to the game. In these ways, our proposed approach, \emph{average deviation incentive descent with adaptive sampling} (ADIDAS), is most similar to three classical approaches, namely homotopy-type, Lyapunov, and iterative polymatrix solvers. We demonstrate through experiments the ability of this approach to approximate a Nash equilibrium in normal-form games with as many as seven players and twenty-one actions (over one billion outcomes) that are orders of magnitude larger than those possible with prior algorithms.
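The loss the abstract describes can be stated concretely. Below is a minimal, hypothetical JAX sketch (not the authors' released implementation) of the quantity ADIDAS descends: the entropy-regularized average deviation incentive, i.e., the mean gap between each player's soft best-response value and the value of their current mixed strategy. For illustration, the full payoff tensors fit in memory and the exact loss is differentiated with jax.grad; the paper's actual algorithm instead estimates these gradients from sampled tensor entries and anneals the regularization tau along the homotopy. The names `expected_pure_payoffs` and `adi`, and all hyperparameters, are assumptions for this sketch.

```python
import jax
import jax.numpy as jnp

def expected_pure_payoffs(U_k, logits, k):
    """Expected payoff to player k for each of their pure actions, with every
    opponent j playing the mixed strategy softmax(logits[j])."""
    T = U_k
    for j in range(len(logits) - 1, -1, -1):   # contract highest axes first
        if j != k:
            T = jnp.tensordot(T, jax.nn.softmax(logits[j]), axes=([j], [0]))
    return T                                    # shape: (num actions of player k,)

def adi(logits, payoff_tensors, tau=0.1):
    """Entropy-regularized average deviation incentive: always >= 0, and zero
    exactly at the quantal response equilibrium of the tau-regularized game."""
    gaps = []
    for k, U_k in enumerate(payoff_tensors):
        x_k = jax.nn.softmax(logits[k])
        u = expected_pure_payoffs(U_k, logits, k)
        br_value = tau * jax.nn.logsumexp(u / tau)  # max_y y.u + tau*H(y)
        value = x_k @ u - tau * (x_k * jnp.log(x_k + 1e-12)).sum()
        gaps.append(br_value - value)
    return jnp.mean(jnp.stack(gaps))

# Usage: descend the exact ADI of a random 3-player, 4-action game.
key = jax.random.PRNGKey(0)
n_players, n_actions = 3, 4
payoffs = [jax.random.uniform(jax.random.fold_in(key, k), (n_actions,) * n_players)
           for k in range(n_players)]
logits = [jnp.zeros(n_actions) for _ in range(n_players)]
adi_grad = jax.jit(jax.grad(adi))
for _ in range(500):
    grads = adi_grad(logits, payoffs)
    logits = [l - 0.5 * g for l, g in zip(logits, grads)]
print(adi(logits, payoffs))  # near 0 => approximate (regularized) Nash
```

Parameterizing strategies by logits keeps iterates on the simplex without projection; in the sampled regime the same gradient is estimated from a handful of payoff-tensor entries per step, which is what lets the method handle games whose tensors cannot be stored.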
Year
2022
Venue
International Joint Conference on Autonomous Agents and Multi-agent Systems
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
9
Name               Order   Citations   PageRank
Ian M. Gemp        1       16          6.37
Rahul Savani       2       243         30.09
Marc Lanctot       3       0           1.35
Yoram Bachrach     4       1262        79.07
Thomas Anthony     5       0           1.69
Richard Everett    6       0           1.01
Andrea Tacchetti   7       138         9.57
Tom Eccles         8       17          5.77
János Kramár       9       0           0.68