Title
Reinforcement Learning with Multiple Shared Rewards
Abstract
A major concern in multi-agent coordination is selecting algorithms that lead agents to learn together toward common goals. Much of the research on multi-agent learning draws on reinforcement learning (RL) techniques. One element of RL is the interaction model, which describes how agents interact with each other and with the environment. Discrete, continuous, and objective-oriented interaction models can each improve convergence among agents. This paper proposes an approach that integrates multi-agent coordination models designed for reward-sharing policies; by combining the best features of each model, it achieves better agent coordination. Our experimental results show that the approach improves convergence among agents even in large state spaces and yields better results than classical RL approaches.
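To make the reward-sharing idea in the abstract concrete, below is a minimal sketch of independent Q-learning agents that blend a shared (team) reward into their individual updates. The toy 1-D environment, the mixing weight BETA, and all names here are illustrative assumptions for exposition; they are not the authors' algorithm or experimental setup.

```python
import random
from collections import defaultdict

N_AGENTS = 2
ACTIONS = [-1, 1]             # move left / right on a 1-D line
GOAL = 5                      # target position for every agent
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
BETA = 0.5                    # weight of the shared-reward component (assumed)

# One Q-table per agent, keyed by (state, action).
q = [defaultdict(float) for _ in range(N_AGENTS)]

def choose(i, s):
    """Epsilon-greedy action selection for agent i in state s."""
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[i][(s, a)])

for episode in range(500):
    pos = [0] * N_AGENTS
    for step in range(50):
        acts = [choose(i, pos[i]) for i in range(N_AGENTS)]
        nxt = [max(-GOAL, min(GOAL, pos[i] + acts[i])) for i in range(N_AGENTS)]
        # Local reward: reaching the goal pays 1, every other step costs a bit.
        local = [1.0 if nxt[i] == GOAL else -0.01 for i in range(N_AGENTS)]
        shared = sum(local) / N_AGENTS   # team reward shared by all agents
        for i in range(N_AGENTS):
            # Blend the individual and shared rewards before the Q-update.
            r = (1 - BETA) * local[i] + BETA * shared
            best_next = max(q[i][(nxt[i], a)] for a in ACTIONS)
            q[i][(pos[i], acts[i])] += ALPHA * (
                r + GAMMA * best_next - q[i][(pos[i], acts[i])]
            )
        pos = nxt
        if all(p == GOAL for p in pos):
            break
```

With BETA = 0 this reduces to fully independent learners; with BETA = 1 every agent optimizes the team reward alone. Intermediate values trade off individual and collective incentives, which is the kind of reward-sharing policy the paper studies.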
Year
2016
DOI
10.1016/j.procs.2016.05.376
Venue
Procedia Computer Science
Keywords
Adaptive Agents, Shared Rewards, Interaction, Learning, Coordination
Field
Convergence (routing), Computer science, Interaction model, Artificial intelligence, Adaptive agents, Error-driven learning, Machine learning, Reinforcement learning
DocType
Conference
Volume
80
ISSN
1877-0509
Citations
1
PageRank
0.40
References
11
Authors
5
Name                 Order  Citations  PageRank
Douglas M. Guisi     1      1          1.41
Richardson Ribeiro   2      43         11.12
Marcelo Teixeira     3      19         8.07
André Pinz Borges    4      14         7.67
Fabrício Enembreck   5      274        38.42