Title
Modeling the Formation of Social Conventions in Multi-Agent Populations
Abstract
In order to understand the formation of social conventions, we need to know the specific roles of control and learning in multi-agent systems. To advance in this direction, we propose, within the framework of the Distributed Adaptive Control (DAC) theory, a novel Control-based Reinforcement Learning (CRL) architecture that can account for the acquisition of social conventions in multi-agent populations solving a benchmark social decision-making problem. Our new CRL architecture, as a concrete realization of DAC multi-agent theory, implements a low-level sensorimotor control loop handling the agent's reactive behaviors (pre-wired reflexes), along with a layer based on model-free reinforcement learning that maximizes long-term reward. We apply CRL to a multi-agent game-theoretic task in which coordination must be achieved in order to find an optimal solution. We show that our CRL architecture is able both to find optimal solutions in discrete and continuous time and to reproduce human experimental data on standard game-theoretic metrics such as efficiency in acquiring rewards, fairness in reward distribution, and stability of convention formation.
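The abstract describes a two-layer design: a reactive sensorimotor control loop plus a model-free reinforcement-learning layer. Below is a minimal Python sketch of that idea, assuming a tabular Q-learning agent whose pre-wired reflex can override the learned policy; the names (CRLAgent, reflex, the toy coordination payoff) are illustrative assumptions, not the authors' implementation.

import random
from collections import defaultdict

# Hypothetical two-layer agent in the spirit of the CRL architecture:
# a pre-wired reactive reflex that can override an adaptive,
# model-free Q-learning policy. All names are illustrative.

class CRLAgent:
    def __init__(self, actions, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.actions = actions          # discrete action set
        self.alpha = alpha              # learning rate
        self.gamma = gamma              # discount factor
        self.epsilon = epsilon          # exploration rate
        self.q = defaultdict(float)     # Q(state, action) table

    def reflex(self, state):
        # Reactive layer: a fixed sensorimotor rule, e.g. yield when a
        # collision with the other agent is imminent (toy stand-in for
        # the paper's low-level control loop).
        return "yield" if state == "collision_imminent" else None

    def act(self, state):
        reactive = self.reflex(state)
        if reactive is not None:
            return reactive             # reflex overrides the RL layer
        if random.random() < self.epsilon:
            return random.choice(self.actions)   # explore
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def learn(self, state, action, reward, next_state):
        # One-step Q-learning update (the model-free layer that
        # maximizes long-term reward).
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])

# Toy coordination episode: two agents are rewarded for complementary
# choices, a simplified stand-in for the benchmark social
# decision-making game used in the paper.
agents = [CRLAgent(["left", "right"]) for _ in range(2)]
for _ in range(5000):
    moves = [a.act("choose_side") for a in agents]
    reward = 1.0 if moves[0] != moves[1] else 0.0
    for agent, move in zip(agents, moves):
        agent.learn("choose_side", move, reward, "choose_side")

The override pattern is the key design choice in this sketch: reflexes guarantee sensible reactive behavior from the start, while the learning layer gradually converges on a shared convention.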
Year
2018
Venue
arXiv: Multiagent Systems
Field
Sensorimotor control, Architecture, Simulation, Convention, Computer science, Need to know, Artificial intelligence, Adaptive control, Reinforcement learning
DocType
Journal
Volume
abs/1802.06108
Citations
0
PageRank
0.34
References
15
Authors
5