Title
Beyond-Visual-Range Air Combat Tactics Auto-Generation by Reinforcement Learning
Abstract
For a long time, effective Beyond-Visual-Range (BVR) air combat tactics could only be discovered by human pilots through actual combat. However, due to the scarcity of actual combat opportunities, innovating new air combat tactics has generally been considered difficult. To address this challenge, we first introduce a purely end-to-end Reinforcement Learning (RL) approach that trains competitive air combat agents from scratch via adversarial self-play in a high-fidelity air combat simulation environment. Furthermore, a Key Air Combat Event Reward Shaping (KAERS) mechanism is proposed to provide sparse but objective shaped rewards beyond the episodic win/lose signal, accelerating the initial learning process. Experimental results show that multiple valuable air combat tactical behaviors emerge progressively. We hope this study can be extended to future research on air combat machine intelligence.
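The abstract does not describe how the KAERS rewards are computed; as a rough illustration only, the sketch below shows one way sparse, event-triggered rewards could be layered on top of the episodic win/lose signal. The event names (radar lock, missile launch, hit) and reward magnitudes are assumptions made for this sketch, not values reported in the paper.

```python
# Minimal sketch of a KAERS-style shaped reward (illustrative assumptions only).
from dataclasses import dataclass


@dataclass
class CombatEvents:
    """Key events observed during one simulation step (assumed event set)."""
    radar_lock_acquired: bool = False   # own radar locked onto the opponent
    radar_lock_lost: bool = False       # opponent escaped own radar lock
    missile_launched: bool = False      # own missile fired within launch envelope
    hit_opponent: bool = False          # own missile hit the opponent
    got_hit: bool = False               # own aircraft was hit
    episode_over: bool = False
    won: bool = False                   # episodic win/lose outcome


def kaers_reward(events: CombatEvents) -> float:
    """Sparse key-event bonuses plus the terminal win/lose signal."""
    reward = 0.0
    if events.radar_lock_acquired:
        reward += 0.1
    if events.radar_lock_lost:
        reward -= 0.1
    if events.missile_launched:
        reward += 0.05
    if events.hit_opponent:
        reward += 0.5
    if events.got_hit:
        reward -= 0.5
    if events.episode_over:
        reward += 1.0 if events.won else -1.0  # episodic win/lose signal
    return reward
```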
Year
2020
DOI
10.1109/IJCNN48605.2020.9207088
Venue
2020 International Joint Conference on Neural Networks (IJCNN)
Keywords
Aircraft, Training, Games, Learning (artificial intelligence), Atmospheric modeling, Markov processes
DocType
Conference
ISSN
2161-4393
ISBN
978-1-7281-6926-2
Citations
1
PageRank
0.41
References
0
Authors
9
Name            Order  Citations  PageRank
Haiyin Piao     1      1          1.42
Zhixiao Sun     2      4          1.62
Guanglei Meng   3      3          1.12
Hechang Chen    4      18         9.53
Bohao Qu        5      1          0.41
Kuijun Lang     6      1          0.41
Yang Sun        7      45         6.65
Shengqi Yang    8      1          0.75
Xuanqi Peng     9      1          0.41