Title
Emergent Coordination Through Competition.
Abstract
We study the emergence of cooperative behaviors in reinforcement learning agents by introducing a challenging competitive multi-agent soccer environment with continuous simulated physics. We demonstrate that decentralized, population-based training with co-play can lead to a progression in agents' behaviors: from random, to simple ball chasing, and finally showing evidence of cooperation. Our study highlights several of the challenges encountered in large-scale multi-agent training in continuous control. In particular, we demonstrate that the automatic optimization of simple shaping rewards, not themselves conducive to cooperative behavior, can lead to long-horizon team behavior. We further apply an evaluation scheme, grounded in game-theoretic principles, that can assess agent performance in the absence of pre-defined evaluation tasks or human baselines.
Year: 2019
Venue: ICLR
Field: Computer science, Human–computer interaction, Artificial intelligence, Machine learning
DocType:
Volume: abs/1902.07151
Citations: 4
Journal:
PageRank: 0.37
References: 34
Authors: 6
Name                   Order  Citations  PageRank
Siqi Liu               1      55         4.94
Guy Lever              2      108        7.07
Josh S. Merel          3      143        11.34
Saran Tunyasuvunakool  4      10         2.14
Nicolas Heess          5      1762       94.77
Thore Graepel          6      4211       242.71