Title
Learning a Behavioral Repertoire from Demonstrations
Abstract
Imitation Learning (IL) is a machine learning approach for learning a policy from a set of demonstrations. IL can be used to kick-start learning before applying reinforcement learning (RL), but it is also useful on its own, e.g., for learning to imitate human players in video games. Despite the success of systems that combine IL and RL, how such systems can adapt between game rounds is a neglected area of study, yet an important aspect of many strategy games. In this paper, we present a new approach called Behavioral Repertoire Imitation Learning (BRIL) that learns a repertoire of behaviors from a set of demonstrations by augmenting the state-action pairs with behavioral descriptions. The outcome is a single neural network policy conditioned on a behavior description that can be precisely modulated. We apply this approach to train a policy on 7,777 human demonstrations for the build-order planning task in StarCraft II. Dimensionality reduction is applied to construct a low-dimensional behavioral space from a high-dimensional description of the army unit composition in each human replay. The results demonstrate that the learned policy can be effectively manipulated to express distinct behaviors. Additionally, by applying the UCB1 algorithm, the policy can adapt its behavior between games, reaching a performance beyond that of the traditional IL baseline.
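The between-game adaptation the abstract describes can be sketched with standard UCB1. This is a minimal, hypothetical illustration (not the authors' implementation), assuming each candidate behavior descriptor is treated as a bandit arm and each game's win/loss as that arm's reward; the names `ucb1_select`, `counts`, and `rewards` are illustrative:

```python
import math

def ucb1_select(counts, rewards, t):
    """Return the index of the behavior to play in round t.

    counts[i]  -- number of games behavior i has been used in
    rewards[i] -- cumulative reward (e.g. wins) for behavior i
    t          -- total number of games played so far
    """
    best, best_score = 0, float("-inf")
    for i, (n, r) in enumerate(zip(counts, rewards)):
        if n == 0:
            return i  # try every behavior once before exploiting
        # UCB1 score: empirical mean reward plus exploration bonus
        score = r / n + math.sqrt(2 * math.log(t) / n)
        if score > best_score:
            best, best_score = i, score
    return best
```

After each game, the caller would increment `counts[i]` for the chosen behavior and add the game outcome to `rewards[i]`, so behaviors that win more often are selected more frequently while rarely tried behaviors still get explored.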
Year: 2020
DOI: 10.1109/CoG47356.2020.9231897
Venue: 2020 IEEE Conference on Games (CoG)
Keywords: StarCraft II, imitation learning, build-order planning, online adaptation
DocType: Conference
ISSN: 2325-4270
ISBN: 978-1-7281-4534-1
Citations: 0
PageRank: 0.34
References: 11
Authors: 5
Name                        Order  Citations  PageRank
Niels Justesen              1      32         4.82
Miguel Gonzalez-Duque       2      0          0.34
Daniel Cabarcas Jaramillo   3      0          0.34
Jean-Baptiste Mouret        4      1041       58.13
Sebastian Risi              5      460        54.67