Title
Graph Convolutional Reinforcement Learning for Multi-Agent Cooperation
Abstract
Learning to cooperate is crucial in multi-agent reinforcement learning. The key is to take the influence of other agents into consideration when performing distributed decision making. However, multi-agent environments are highly dynamic, which makes it hard to learn abstract representations of the influence between agents from only the low-order features that existing methods exploit. In this paper, we propose a graph convolutional model for multi-agent cooperation. The graph convolution architecture adapts to the dynamics of the underlying graph of the multi-agent environment, where the influence among agents is captured by their abstract relation representations. High-order features, extracted by the relation kernels of convolutional layers from gradually enlarged receptive fields, are exploited to learn cooperative strategies. The gradient of an agent backpropagates not only to itself but also to the other agents in its receptive field, reinforcing the learned cooperative strategies. Moreover, the relation representations are temporally regularized to make cooperation more consistent. Empirically, we show that our model enables agents to develop more cooperative and sophisticated strategies than existing methods in jungle and battle games and in routing for packet-switched networks.
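The relation kernel described in the abstract can be illustrated with a minimal sketch: one convolutional layer that aggregates each agent's receptive field via attention. This is a hedged, single-head numpy illustration of the general idea, not the authors' exact model; the function name `relation_kernel` and the projection matrices `Wq`, `Wk`, `Wv` are assumptions for the example.

```python
import numpy as np

def relation_kernel(features, adjacency, Wq, Wk, Wv):
    """One graph-convolutional layer with an attention relation kernel
    (illustrative sketch, not the paper's exact architecture).

    features : (N, F) per-agent feature vectors
    adjacency: (N, N) binary matrix; adjacency[i, j] = 1 iff agent j is in
               agent i's receptive field (self-loops included)
    Wq, Wk, Wv: (F, D) query/key/value projection matrices
    Returns a (N, D) relation representation aggregated over each
    agent's receptive field.
    """
    q = features @ Wq                                   # queries
    k = features @ Wk                                   # keys
    v = features @ Wv                                   # values
    scores = (q @ k.T) / np.sqrt(k.shape[1])            # scaled dot-product
    scores = np.where(adjacency > 0, scores, -np.inf)   # mask non-neighbors
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)       # softmax over receptive field
    return weights @ v                                  # aggregated relation features
```

Stacking two such layers lets features propagate over two hops, which is how the "gradually enlarged receptive fields" yield high-order features.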
Year
2018
Venue
arXiv: Learning
Field
Graph, Architecture, Convolution, Exploit, Distributed decision, Artificial intelligence, Packet switching, Machine learning, Mathematics, Reinforcement learning
DocType
Journal
Volume
abs/1810.09202
Citations
5
PageRank
0.40
References
22
Authors
3
Name            Order  Citations  PageRank
Jiechuan Jiang  1      5          0.40
Chen Dun        2      5          1.08
Zongqing Lu     3      209        26.18