Title |
---|
Deep Deterministic Policy Gradients with Transfer Learning Framework in StarCraft Micromanagement |
Abstract |
---|
This paper proposes an intelligent multi-agent approach for a real-time strategy game, StarCraft, based on the deep deterministic policy gradient (DDPG) technique. An actor network and a critic network are established to estimate the optimal control actions and the corresponding value functions, respectively. A special reward function, based on the agents' own condition and the enemies' information, is designed to help the agents exercise intelligent control in the game. Furthermore, to accelerate the learning process, transfer learning techniques are integrated into training. Specifically, the agents are first trained on a simple task to learn basic combat concepts such as detouring, avoiding attacks, and joint attacking. This experience is then transferred to the target task, a more complex and difficult scenario. The experiments show that the proposed algorithm with transfer learning achieves better performance. |
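The actor-critic structure the abstract describes — a deterministic actor proposing control actions and a critic estimating their value, updated by temporal-difference learning — can be sketched as follows. This is a hypothetical illustration with linear function approximators, not the paper's actual networks; the dimensions, learning rate, and discount factor are placeholder assumptions.

```python
import numpy as np

# Minimal DDPG-style actor-critic sketch (hypothetical, linear approximators):
# actor mu(s) = W_a @ s, critic Q(s, a) = w_c @ concat(s, a).
rng = np.random.default_rng(0)
state_dim, action_dim = 4, 2

W_a = rng.normal(scale=0.1, size=(action_dim, state_dim))  # actor weights
w_c = rng.normal(scale=0.1, size=state_dim + action_dim)   # critic weights

def actor(s):
    # Deterministic policy: maps a state to a control action.
    return W_a @ s

def critic(s, a):
    # Value estimate Q(s, a) for a state-action pair.
    return w_c @ np.concatenate([s, a])

def ddpg_update(s, a, r, s_next, gamma=0.99, lr=0.01):
    global W_a, w_c
    # Critic update: move Q(s, a) toward the TD target r + gamma * Q(s', mu(s')).
    td_error = r + gamma * critic(s_next, actor(s_next)) - critic(s, a)
    w_c = w_c + lr * td_error * np.concatenate([s, a])
    # Actor update: deterministic policy gradient; for this linear critic,
    # grad_a Q(s, a) is simply the critic's weights on the action part.
    grad_a_Q = w_c[state_dim:]
    W_a = W_a + lr * np.outer(grad_a_Q, s)
    return td_error

s = rng.normal(size=state_dim)
a = actor(s)
delta = ddpg_update(s, a, r=1.0, s_next=rng.normal(size=state_dim))
```

In the full method, deep networks replace the linear maps, updates are computed over minibatches from a replay buffer, and (per the abstract) the weights learned on the simple source task initialize training on the harder target task.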
Year | DOI | Venue |
---|---|---|
2019 | 10.1109/EIT.2019.8833742 | 2019 IEEE International Conference on Electro Information Technology (EIT) |
Keywords | Field | DocType
---|---|---
multi-agent, deep deterministic policy gradients, strategy game, intelligent control, transfer learning | Intelligent control, Optimal control, Computer science, Transfer of learning, Computer network, Artificial intelligence, Micromanagement | Conference
ISSN | ISBN | Citations
---|---|---
2154-0357 | 978-1-7281-0928-2 | 0
PageRank | References | Authors
---|---|---
0.34 | 12 | 2
Name | Order | Citations | PageRank |
---|---|---|---
Dong Xie | 1 | 0 | 0.34 |
Xiangnan Zhong | 2 | 346 | 16.35 |