Title
Lenient Multi-Agent Deep Reinforcement Learning
Abstract
Much of the success of single agent deep reinforcement learning (DRL) in recent years can be attributed to the use of experience replay memories (ERM), which allow Deep Q-Networks (DQNs) to be trained efficiently through sampling stored state transitions. However, care is required when using ERMs for multi-agent deep reinforcement learning (MA-DRL), as stored transitions can become outdated when agents update their policies in parallel [9]. In this work we apply leniency [22] to MA-DRL. Lenient agents map state-action pairs to decaying temperature values that control the amount of leniency applied towards negative policy updates that are sampled from the ERM. This introduces optimism in the value function update, and has been shown to facilitate cooperation in tabular fully-cooperative multi-agent reinforcement learning problems. We evaluate our Lenient-DQN (LDQN) empirically against the related Hysteretic-DQN (HDQN) algorithm [20] as well as a modified version we call scheduled-HDQN, that uses average reward learning near terminal states. Evaluations take place in extended variations of the Coordinated Multi-Agent Object Transportation Problem (CMOTP) [6]. We find that LDQN agents are more likely to converge to the optimal policy in a stochastic reward CMOTP compared to standard and scheduled-HDQN agents.
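The leniency mechanism summarised above can be illustrated with a minimal tabular Q-learning sketch: negative temporal-difference updates are ignored with a probability governed by a decaying per state-action temperature, which introduces the optimism mentioned in the abstract. This is only an illustrative assumption-laden sketch of tabular leniency, not the authors' LDQN; all names and constants (ALPHA, GAMMA, K, TEMP_DECAY, lenient_update) are hypothetical.

```python
import math
import random
from collections import defaultdict

# Illustrative hyper-parameters (assumptions, not values from the paper).
ALPHA = 0.1         # learning rate
GAMMA = 0.95        # discount factor
K = 2.0             # leniency moderation factor
TEMP_DECAY = 0.995  # multiplicative temperature decay per visit

Q = defaultdict(float)                   # Q-values keyed by (state, action)
temperature = defaultdict(lambda: 1.0)   # per state-action temperature

def leniency(state, action):
    # High temperature -> high leniency; decays towards zero with visits.
    return 1.0 - math.exp(-K * temperature[(state, action)])

def lenient_update(state, action, reward, next_state, actions, done=False):
    # Standard Q-learning target; zero bootstrap at terminal states.
    bootstrap = 0.0 if done else max(Q[(next_state, a)] for a in actions)
    delta = reward + GAMMA * bootstrap - Q[(state, action)]
    # Positive updates are always applied; negative updates are ignored
    # with probability equal to the current leniency, so rarely visited
    # (hot) state-action pairs are treated optimistically.
    if delta > 0 or random.random() > leniency(state, action):
        Q[(state, action)] += ALPHA * delta
    # Cool the temperature so leniency fades as the pair is revisited.
    temperature[(state, action)] *= TEMP_DECAY
```

In the deep LDQN setting described in the abstract, the same idea is applied to state transitions sampled from the experience replay memory rather than to a tabular Q-value.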
Year
2018
DOI
10.5555/3237383.3237451
Venue
PROCEEDINGS OF THE 17TH INTERNATIONAL CONFERENCE ON AUTONOMOUS AGENTS AND MULTIAGENT SYSTEMS (AAMAS '18)
Keywords
Multi-Agent Deep Reinforcement Learning, Leniency
DocType
Conference
Volume
abs/1707.04402
Citations
6
PageRank
0.42
References
18
Authors
4
Name | Order | Citations | PageRank
Gregory Palmer | 1 | 6 | 1.10
Karl Tuyls | 2 | 1272 | 127.83
Daan Bloembergen | 3 | 87 | 11.55
Rahul Savani | 4 | 243 | 30.09