Title
All by Myself: Learning individualized competitive behavior with a contrastive reinforcement learning optimization
Abstract
In a competitive game scenario, a set of agents has to learn decisions that maximize their own goals while minimizing their adversaries' goals. Besides dealing with the increased dynamics of the scenario caused by the opponents' actions, they usually have to understand how to overcome their opponents' strategies. Most common solutions, however, usually based on continual learning or centralized multi-agent experiences, do not allow the development of personalized strategies to face individual opponents. In this paper, we propose a novel model composed of three neural layers that learn a representation of the competitive game, map the strategy of specific opponents, and learn how to disrupt it. The entire model is trained online, using a composed loss based on contrastive optimization, to learn competitive and multiplayer games. We evaluate our model on a Pokémon duel scenario and on the four-player competitive Chef's Hat card game. Our experiments demonstrate that our model achieves better performance when playing against offline, online, and competitive-specific models, in particular when playing against the same opponent multiple times. We also present a discussion on the impact of our model, in particular on how well it deals with specific strategy learning in each of the two scenarios.
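The record does not include the paper's implementation. As a rough illustration of the kind of objective the abstract describes (an online reinforcement learning loss combined with a contrastive term that separates opponent strategies), the following is a minimal, hypothetical PyTorch sketch. All module names, the InfoNCE-style contrastive term, the REINFORCE-style policy term, and the weighting parameter beta are assumptions made for illustration, not the authors' architecture or loss.

    # Hypothetical sketch (not the authors' code): a policy whose training
    # objective combines a REINFORCE-style reinforcement learning term with an
    # InfoNCE-style contrastive term that pulls together embeddings of states
    # played against the same opponent and pushes apart embeddings from
    # different opponents. All names and hyperparameters are illustrative.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CompetitivePolicy(nn.Module):
        def __init__(self, obs_dim: int, n_actions: int, embed_dim: int = 64):
            super().__init__()
            # Layer 1: shared representation of the game state.
            self.encoder = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU())
            # Layer 2: embedding meant to separate opponent strategies.
            self.opponent_head = nn.Linear(128, embed_dim)
            # Layer 3: action logits used to counter (disrupt) the opponent.
            self.policy_head = nn.Linear(128 + embed_dim, n_actions)

        def forward(self, obs: torch.Tensor):
            h = self.encoder(obs)
            z = F.normalize(self.opponent_head(h), dim=-1)
            logits = self.policy_head(torch.cat([h, z], dim=-1))
            return logits, z

    def info_nce(z: torch.Tensor, opponent_ids: torch.Tensor, temperature: float = 0.1):
        """Contrastive term: samples sharing an opponent id are positives."""
        sim = z @ z.t() / temperature                          # pairwise similarities
        pos = opponent_ids.unsqueeze(0) == opponent_ids.unsqueeze(1)
        pos.fill_diagonal_(False)                              # self-pairs are not positives
        sim = sim - torch.eye(len(z), device=z.device) * 1e9   # exclude self-similarity
        log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
        pos_counts = pos.sum(dim=1).clamp(min=1)
        return -(log_prob * pos).sum(dim=1).div(pos_counts).mean()

    def composed_loss(policy, obs, actions, returns, opponent_ids, beta: float = 0.5):
        logits, z = policy(obs)
        log_pi = torch.distributions.Categorical(logits=logits).log_prob(actions)
        rl_loss = -(log_pi * returns).mean()                   # policy-gradient term
        return rl_loss + beta * info_nce(z, opponent_ids)      # composed objective

In this reading, the contrastive term groups experiences by opponent identity, so repeated matches against the same opponent sharpen the opponent embedding that the policy conditions on, while beta trades off the reinforcement learning and contrastive terms.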
Year
2022
DOI
10.1016/j.neunet.2022.03.013
Venue
Neural Networks
Keywords
Reinforcement learning, Contrastive learning, Competitive learning
DocType
Journal
Volume
150
Issue
1
ISSN
0893-6080
Citations
0
PageRank
0.34
References
1
Authors
2
Name                  Order  Citations  PageRank
Pablo V. A. Barros    1      119        22.02
Alessandra Sciutti    2      62         20.57