Title
An Efficient Transfer Learning Framework for Multiagent Reinforcement Learning
Abstract
Transfer learning has shown great potential to enhance the efficiency of single-agent Reinforcement Learning (RL). Similarly, Multiagent RL (MARL) can be accelerated if agents can share knowledge with each other. However, how an agent should learn from other agents remains an open problem. In this paper, we propose a novel Multiagent Policy Transfer Framework (MAPTF) to improve MARL efficiency. MAPTF models multiagent policy transfer as an option learning problem, learning for each agent which other agent's policy is best to reuse and when to terminate the reuse. Furthermore, in practice, due to the partial observability of the environment, the option module can only collect all agents' local experiences for its update. In this setting, the agents' experiences may be inconsistent with one another, which can make the option-value estimate inaccurate and unstable. We therefore propose a novel option learning algorithm, successor representation option learning, which addresses this issue by decoupling the environment dynamics from rewards and learning the option-value under each agent's preference. MAPTF can be easily combined with existing deep RL and MARL approaches, and experimental results show that it significantly boosts the performance of existing methods in both discrete and continuous state spaces.
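For context on the last contribution: "decoupling the environment dynamics from rewards" is the hallmark of the successor representation (Dayan, 1993) and its successor-feature generalization (Barreto et al., 2017). A minimal sketch of that decomposition, using illustrative symbols \phi (reward features), \psi (successor features), and \mathbf{w}_i (agent i's preference vector) that may differ from the paper's exact notation:

r_i(s, o) = \phi(s, o)^\top \mathbf{w}_i
\psi^\pi(s, o) = \mathbb{E}_\pi\!\left[ \sum_{t=0}^{\infty} \gamma^t \phi(s_t, o_t) \;\middle|\; s_0 = s,\, o_0 = o \right]
Q_i^\pi(s, o) = \psi^\pi(s, o)^\top \mathbf{w}_i

Under this decomposition, \psi^\pi depends only on the shared environment dynamics and can be estimated from all agents' pooled experiences, while each agent's reward enters only through its own \mathbf{w}_i, so a per-agent option-value Q_i^\pi can be recovered without other agents' inconsistent rewards contaminating a single shared estimate.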
Year
2021
Venue
Annual Conference on Neural Information Processing Systems
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors (12)
Name            Order  Citations  PageRank
Tianpei Yang    1      13         6.43
Weixun Wang     2      1          5.75
Hongyao Tang    3      2          4.45
Jianye Hao      4      1895       5.78
Zhaopeng Meng   5      791        5.68
Hangyu Mao      6      9          5.24
Dong Li         7      0          0.68
Wulong Liu      8      12         8.36
Yingfeng Chen   9      691        3.64
Yujing Hu       10     2          4.44
Changjie Fan    11     572        1.37
Chengwei Zhang  12     1          1.03