Title
Multi-Agent Concentrative Coordination with Decentralized Task Representation
Abstract
Value-based multi-agent reinforcement learning (MARL) methods hold the promise of promoting coordination in cooperative settings. Popular MARL methods mainly focus on the scalability or the representational capacity of value functions. While such a learning paradigm can reduce agents' uncertainties and promote coordination, it fails to leverage the decomposability of the task structure that generally exists in real-world multi-agent systems (MASs), and therefore requires a significant amount of time to explore the optimal policy in complex scenarios. To address this limitation, we propose a novel framework, Multi-Agent Concentrative Coordination (MACC), based on task decomposition, with which an agent can implicitly form local groups to reduce the learning space and facilitate coordination. In MACC, agents first learn representations of subtasks from their local information and then use an attention mechanism to concentrate on the most relevant ones. Agents can thus pay targeted attention to specific subtasks and improve coordination. Extensive experiments on various complex multi-agent benchmarks demonstrate that MACC achieves remarkable performance compared with existing methods.
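The abstract describes agents that learn subtask representations from local information and then attend to the most relevant ones. The snippet below is a minimal, hypothetical PyTorch sketch of that idea only; the class name, the per-subtask linear encoders, and all layer sizes are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): an agent encodes several subtask
# representations from its local observation, then uses scaled dot-product
# attention to concentrate on the most relevant subtasks before computing utilities.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConcentrativeAgent(nn.Module):
    """Hypothetical agent utility network with attention over subtask embeddings."""

    def __init__(self, obs_dim: int, n_subtasks: int, embed_dim: int = 32, n_actions: int = 5):
        super().__init__()
        # One encoder per subtask, each producing an embedding from the local observation.
        self.subtask_encoders = nn.ModuleList(
            [nn.Linear(obs_dim, embed_dim) for _ in range(n_subtasks)]
        )
        self.query = nn.Linear(obs_dim, embed_dim)    # agent-state query
        self.key = nn.Linear(embed_dim, embed_dim)    # subtask keys
        self.value = nn.Linear(embed_dim, embed_dim)  # subtask values
        self.q_head = nn.Linear(embed_dim, n_actions)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # obs: (batch, obs_dim)
        subtasks = torch.stack([enc(obs) for enc in self.subtask_encoders], dim=1)  # (B, k, d)
        q = self.query(obs).unsqueeze(1)                       # (B, 1, d)
        k = self.key(subtasks)                                 # (B, k, d)
        v = self.value(subtasks)                               # (B, k, d)
        scores = (q @ k.transpose(1, 2)) / k.size(-1) ** 0.5   # (B, 1, k)
        weights = F.softmax(scores, dim=-1)                    # attention over subtasks
        context = (weights @ v).squeeze(1)                     # (B, d)
        return self.q_head(context)                            # per-action utilities


if __name__ == "__main__":
    agent = ConcentrativeAgent(obs_dim=16, n_subtasks=4)
    print(agent(torch.randn(2, 16)).shape)  # torch.Size([2, 5])
```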
Year
2022
DOI
10.24963/ijcai.2022/85
Venue
International Joint Conference on Artificial Intelligence (IJCAI)
Keywords
Agent-based and Multi-agent Systems: Coordination and Cooperation; Agent-based and Multi-agent Systems: Agreement Technologies: Argumentation; Agent-based and Multi-agent Systems: Agreement Technologies: Negotiation and Contract-Based Systems; Agent-based and Multi-agent Systems: Mechanism Design; Agent-based and Multi-agent Systems: Multi-agent Learning
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
9
Name                Order    Citations    PageRank
Lei Yuan            1        0            0.68
Chenghe Wang        2        0            0.68
Jianhao Wang        3        0            1.01
Fuxiang Zhang       4        0            0.34
Feng Chen           5        0            0.34
Cong Guan           6        0            1.01
Zongzhang Zhang     7        0            0.68
Chongjie Zhang      8        154          23.80
Yang Yu             9        484          55.96