Title
Finite-Sample Analyses for Fully Decentralized Multi-Agent Reinforcement Learning.
Abstract
Despite the increasing interest in multi-agent reinforcement learning (MARL) in the community, understanding its theoretical foundation has long been recognized as a challenging problem. In this work, we take a step towards addressing this problem by providing finite-sample analyses for fully decentralized MARL. Specifically, we consider two fully decentralized MARL settings, in which teams of agents are connected by time-varying communication networks and either collaborate or compete in a zero-sum game, without any central controller. These settings cover many conventional MARL settings in the literature. For both settings, we develop batch MARL algorithms that can be implemented in a fully decentralized fashion, and we quantify the finite-sample errors of the estimated action-value functions. Our error analyses characterize how the function class, the number of samples within each iteration, and the number of iterations determine the statistical accuracy of the proposed algorithms. Compared with finite-sample bounds for single-agent RL, our results identify additional error terms caused by decentralized computation, which is inherent in our decentralized MARL setting. To our knowledge, this work provides the first finite-sample analyses for MARL, shedding light on both the sample and computational efficiency of MARL algorithms.
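The abstract does not spell out the algorithms themselves; as a rough, hypothetical illustration of the setting it describes (each agent performing a batch action-value update on its own local samples and then averaging with neighbors over a time-varying network), the following toy sketch may help. It is not the paper's method, and all names (decentralized_batch_q, consensus, the mixing matrices, the tabular Q representation) are assumptions made here for illustration only.

```python
# Hypothetical toy sketch (NOT the paper's algorithm): decentralized batch
# Q-iteration where each agent backs up its own local samples, then
# consensus-averages Q-tables with neighbors over a time-varying network.
import numpy as np

def consensus(q_tables, weights):
    """Mix agents' Q-tables with a doubly stochastic weight matrix.

    q_tables: array of shape (num_agents, num_states, num_actions)
    weights:  array of shape (num_agents, num_agents)
    """
    return np.einsum("ij,jsa->isa", weights, q_tables)

def decentralized_batch_q(sample_batches, mixing_matrices,
                          num_states, num_actions, gamma=0.99):
    """Run one consensus-based batch Q-iteration per mixing matrix.

    sample_batches[i] holds agent i's transitions (s, a, r_i, s_next),
    where r_i is the reward observed locally by agent i; the mixing
    matrix may change each iteration (time-varying network).
    """
    num_agents = len(sample_batches)
    q = np.zeros((num_agents, num_states, num_actions))
    for weights in mixing_matrices:           # one iteration per network snapshot
        new_q = q.copy()
        for i, batch in enumerate(sample_batches):
            for s, a, r, s_next in batch:     # batch Bellman backup on local data
                new_q[i, s, a] = r + gamma * q[i, s_next].max()
            # (in the batch/fitted setting this backup would be a regression step)
        q = consensus(new_q, weights)         # decentralized averaging step
    return q.mean(axis=0)

# Tiny usage example: two agents on a 2-state, 2-action problem
if __name__ == "__main__":
    batches = [
        [(0, 0, 1.0, 1), (1, 1, 0.0, 0)],     # agent 0's local samples
        [(0, 1, 0.5, 1), (1, 0, 1.0, 0)],     # agent 1's local samples
    ]
    W = np.full((2, 2), 0.5)                   # doubly stochastic mixing matrix
    print(decentralized_batch_q(batches, [W] * 20, num_states=2, num_actions=2))
```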
Year
2018
Venue
arXiv: Learning
DocType
Journal
Volume
abs/1812.02783
Citations
1
PageRank
0.34
References
41
Authors
5
Name             Order    Citations    PageRank
Kaiqing Zhang    1        48           13.02
Zhuoran Yang     2        52           29.86
Han Liu          3        434          42.70
Zhang, Tong      4        7126         611.43
Tamer Basar      5        3497         402.11