Title |
---|
A New Framework for Multi-Agent Reinforcement Learning -- Centralized Training and Exploration with Decentralized Execution via Policy Distillation |
Abstract |
---|
Multi-agent deep reinforcement learning demands highly coordinated environment exploration among all participating agents. Previous research attempted to address this challenge by learning centralized value functions. However, the common strategy of having every agent learn its local policy directly may fail to nurture inter-agent collaboration and can be sample inefficient whenever agents alter their communication channels. To address these issues, we propose a new framework: centralized training and exploration with decentralized execution via policy distillation. Guided by this framework, agents' policies are first trained with a shared global component to foster coordinated and effective learning. Locally executable policies are subsequently derived from the trained global policies via policy distillation.
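The abstract's final step, deriving locally executable policies from trained global policies via policy distillation, is commonly implemented by minimizing the KL divergence between a teacher (global) policy's action distribution and a student (local) policy's distribution over a batch of observations. The paper's exact loss is not given here, so the following is a minimal sketch of that standard distillation objective; the function names, the temperature value, and the use of raw logits are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert action logits to a probability distribution (temperature-scaled)."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Mean KL(teacher || student) over a batch of per-observation action logits.

    A standard policy-distillation objective; hypothetical stand-in for the
    paper's actual loss, which is not specified in the abstract.
    """
    p = softmax(teacher_logits, temperature)  # teacher (global policy) distribution
    q = softmax(student_logits, temperature)  # student (local policy) distribution
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return float(np.mean(kl))
```

When the student's logits match the teacher's, the loss is zero; gradient descent on this quantity (via an autodiff framework in practice) pulls each agent's local policy toward the shared global policy's behavior.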
|
Year | DOI | Venue |
---|---|---|
2020 | 10.5555/3398761.3398987 | AAMAS '20: International Conference on Autonomous Agents and Multiagent Systems, Auckland, New Zealand, May 2020 |
DocType | ISBN | Citations |
---|---|---|
Conference | 978-1-4503-7518-4 | 0 |

PageRank | References | Authors |
---|---|---|
0.34 | 0 | 1 |