Title
Episodic Exploration for Deep Deterministic Policies: An Application to StarCraft Micromanagement Tasks
Abstract
We consider scenarios from the real-time strategy game StarCraft as new benchmarks for reinforcement learning algorithms. We propose micromanagement tasks, which pose the problem of short-term, low-level control of army members during a battle. From a reinforcement learning point of view, these scenarios are challenging because the state-action space is very large, and because there is no obvious feature representation for the state-action evaluation function. We describe our approach to tackling the micromanagement scenarios with deep neural network controllers operating on raw state features given by the game engine. In addition, we present a heuristic reinforcement learning algorithm that combines direct exploration in the policy space with backpropagation. This algorithm allows traces for learning to be collected with deterministic policies, which appears much more efficient than, for example, ε-greedy exploration. Experiments show that with this algorithm, we successfully learn non-trivial strategies for scenarios with armies of up to 15 agents, where both Q-learning and REINFORCE struggle.
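The abstract contrasts the paper's deterministic-policy exploration with ε-greedy exploration. As a reminder of the baseline being compared against, here is a minimal ε-greedy action-selection sketch; the function name and signature are illustrative, not taken from the paper:

```python
import random

def epsilon_greedy(q_values, epsilon, rng=random):
    """Select an action from Q-value estimates with ε-greedy exploration.

    With probability epsilon, pick a uniformly random action (explore);
    otherwise pick the action with the highest Q-value (exploit).
    """
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))  # explore: any action
    # exploit: index of the maximal Q-value
    return max(range(len(q_values)), key=lambda a: q_values[a])
```

With `epsilon=0`, the rule reduces to greedy action selection; the paper's point is that injecting per-step random actions like this yields noisy traces, whereas exploring directly in policy space keeps each collected trace consistent with a single deterministic policy.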
Year: 2016
Venue: arXiv: Artificial Intelligence
Fields: Heuristic, Computer science, Evaluation function, Artificial intelligence, Micromanagement, Backpropagation, Reinforcement learning algorithm, Artificial neural network, Game engine, Machine learning, Reinforcement learning
DocType:
Volume: abs/1609.02993
Citations: 34
Journal:
PageRank: 1.43
References: 6
Authors: 4
Name              Order  Citations  PageRank
Nicolas Usunier   1      1974       97.52
Gabriel Synnaeve  2      240        16.91
Zeming Lin        3      63         6.04
Soumith Chintala  4      2056       102.09