Title |
---|
Learning Distinct Strategies for Heterogeneous Cooperative Multi-agent Reinforcement Learning |
Abstract |
---|
Value decomposition has been a promising paradigm for cooperative multi-agent reinforcement learning. Many approaches have been proposed, but few of them consider heterogeneous settings. Agents with vastly different behaviours pose great challenges for centralized training with decentralized execution. In this paper, we provide a formulation for heterogeneous multi-agent reinforcement learning together with some theoretical analysis. On top of that, we propose an efficient two-stage heterogeneous learning method. The first stage is a transfer technique that tunes existing homogeneous models into heterogeneous ones, which accelerates convergence. In the second stage, iterative learning with centralized training is designed to improve overall performance. We conduct experiments on heterogeneous unit micromanagement tasks in StarCraft II. The results show that our method improves the win rate by around 20% on the most difficult scenario, compared with state-of-the-art methods, i.e., QMIX and Weighted QMIX. |
Year | DOI | Venue |
---|---|---|
2021 | 10.1007/978-3-030-86380-7_44 | ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2021, PT IV |
Keywords | DocType | Volume |
---|---|---|
Multi-agent reinforcement learning, Heterogeneity, Transfer learning | Conference | 12894 |
ISSN | Citations | PageRank |
---|---|---|
0302-9743 | 0 | 0.34 |
References | Authors |
---|---|
0 | 3 |