Title: Effective learning in the presence of adaptive counterparts
Abstract
Adaptive learning algorithms (ALAs) are an important class of agents that learn the utilities of their strategies while maintaining beliefs about their counterparts' future actions. In this paper, we propose an approach to learning in the presence of adaptive counterparts. Our Q-learning-based algorithm, called Adaptive Dynamics Learner (ADL), assigns Q-values to fixed-length interaction histories, which makes it capable of exploiting the strategy-update dynamics of adaptive learners. By doing so, ADL usually obtains higher utilities than equilibrium solutions yield. We tested our algorithm on a substantial set of well-known and representative matrix games. We observed that ADL is highly effective in the presence of such ALAs as Adaptive Play Q-learning, Infinitesimal Gradient Ascent, Policy Hill-Climbing, and Fictitious Play Q-learning. Furthermore, in self-play ADL usually converges to a Pareto-efficient average utility.
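The abstract's central idea, assigning Q-values to fixed-length interaction histories rather than to single states, can be illustrated with a minimal sketch. This is not the paper's ADL algorithm: the payoff matrix, the opponent model, the update rule, and all parameters (`k`, `alpha`, `gamma`, `eps`) are assumptions made purely for illustration.

```python
import random
from collections import defaultdict

def history_q_learning(payoff, opponent, k=2, episodes=5000,
                       alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Illustrative Q-learning over a sliding window of the last k
    joint actions (a toy sketch, not the paper's ADL update)."""
    rng = random.Random(seed)
    actions = range(len(payoff))
    Q = defaultdict(float)               # (history, action) -> value
    hist = tuple()                       # last k joint actions
    for _ in range(episodes):
        # epsilon-greedy action selection over the current history
        if rng.random() < eps:
            a = rng.choice(list(actions))
        else:
            a = max(actions, key=lambda x: Q[(hist, x)])
        b = opponent(hist)               # adaptive counterpart's move
        r = payoff[a][b]                 # row player's payoff
        nxt = (hist + ((a, b),))[-k:]    # slide the fixed-length window
        best = max(Q[(nxt, x)] for x in actions)
        Q[(hist, a)] += alpha * (r + gamma * best - Q[(hist, a)])
        hist = nxt
    return Q

# Hypothetical usage: prisoner's-dilemma row payoffs against an
# opponent that simply copies our previous action (tit-for-tat style).
payoff = [[3, 0], [5, 1]]
opp = lambda h: h[-1][0] if h else 0
Q = history_q_learning(payoff, opp)
```

Because the history is part of the state, the learner can in principle pick up regularities in how an adaptive opponent reacts to recent play, which is the exploitation mechanism the abstract describes.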
Year: 2009
DOI: 10.1016/j.jalgor.2009.04.003
Venue: J. Algorithms
Keywords: multiagent learning; matrix games; adaptive learning algorithms; demonstrative matrix game; Pareto efficient average utility; adaptive counterpart; effective learning; infinitesimal gradient ascent; self-play ADL; fictitious play Q-learning; policy hill-climbing; adaptive dynamics learner; adaptive learner; adaptive play Q-learning; fictitious play; adaptive learning; hill climbing
Field: Gradient descent; Fictitious play; Multiagent learning; Artificial intelligence; Adaptive algorithm; Matrix games; Adaptive learning; Mathematics; Infinitesimal; Pareto principle
DocType: Journal
Volume: 64
Issue: 4
ISSN: 0196-6774
Citations: 5
PageRank: 0.50
References: 15
Authors: 2
1. Andriy Burkov (Citations: 23, PageRank: 4.03)
2. Brahim Chaib-draa (Citations: 1190, PageRank: 113.23)