Title
Modular Architecture for StarCraft II with Deep Reinforcement Learning
Abstract
We present a novel modular architecture for StarCraft II AI. The architecture splits responsibilities among multiple modules, each controlling one aspect of the game, such as build-order selection or tactics. A centralized scheduler reviews the macros suggested by all modules and decides their order of execution. An updater keeps track of environment changes and instantiates macros into series of executable actions. Modules in this framework can be optimized independently or jointly via human design, planning, or reinforcement learning. We apply deep reinforcement learning to train two of the agent's six modules with self-play, achieving win rates of 94% and 87% against the Harder (level 5) built-in Blizzard bot in Zerg vs. Zerg matches, with and without fog-of-war respectively.
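The control loop the abstract describes (modules propose macros, a scheduler decides their order of execution, an updater instantiates them into executable actions) can be sketched in Python as below. All class names, method names, macro names, and the priority rule are illustrative assumptions for this sketch, not the paper's actual API.

```python
from dataclasses import dataclass
from typing import Callable, List

# A macro bundles a name, a scheduling priority, and a callback that the
# updater uses to expand it into primitive, executable actions.
@dataclass
class Macro:
    name: str
    priority: int
    expand: Callable[[], List[str]]

class Module:
    """One module per game aspect (e.g. build order, tactics)."""
    def propose(self, observation) -> List[Macro]:
        raise NotImplementedError

class BuildOrderModule(Module):
    def propose(self, observation) -> List[Macro]:
        # Hypothetical rule: keep worker production going.
        return [Macro("train_drone", priority=2,
                      expand=lambda: ["select_larva", "train_drone"])]

class TacticsModule(Module):
    def propose(self, observation) -> List[Macro]:
        return [Macro("attack_move", priority=1,
                      expand=lambda: ["select_army", "attack_enemy_base"])]

class Scheduler:
    """Reviews macros suggested by all modules; decides execution order."""
    def schedule(self, macros: List[Macro]) -> List[Macro]:
        return sorted(macros, key=lambda m: m.priority, reverse=True)

class Updater:
    """Instantiates scheduled macros into a flat series of actions."""
    def instantiate(self, macros: List[Macro]) -> List[str]:
        return [action for m in macros for action in m.expand()]

# One agent step: gather proposals from all modules, schedule, expand.
modules = [BuildOrderModule(), TacticsModule()]
observation = {}  # placeholder for the game state
proposals = [m for mod in modules for m in mod.propose(observation)]
actions = Updater().instantiate(Scheduler().schedule(proposals))
print(actions)  # ['select_larva', 'train_drone', 'select_army', 'attack_enemy_base']
```

In this decomposition each module can be swapped for a scripted, planned, or learned policy without touching the others, which is what lets the paper train only two of the six modules with reinforcement learning.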
Year: 2018
Venue: AIIDE
DocType: Journal
Volume: abs/1811.03555
Citations: 2
PageRank: 0.64
References: 16
Authors: 6
Name              Order  Citations/PageRank
Dennis Lee        1      221.63
Haoran Tang       2      653.03
Jeffrey O. Zhang  3      20.64
Huazhe Xu         4      577.43
Trevor Darrell    5      224131800.67
Pieter Abbeel     6      363376.48