Abstract |
---|
Computer-generated forces (CGFs) inhabiting air combat training simulations must show realistic and adaptive behavior to effectively perform their roles as allies and adversaries. In earlier work, behavior for these CGFs was successfully generated using reinforcement learning. However, because missile hits are subject to chance (the so-called probability-of-kill), the CGFs have in certain cases been improperly rewarded and punished. We surmise that taking this probability-of-kill into account in the reward function will improve performance. To remedy the false rewards and punishments, a new reward function is proposed that rewards agents based on the expected outcome of their actions rather than on the chance-dependent actual outcome. Tests show that the use of this function significantly increases the performance of the CGFs in various scenarios, compared to both the previous reward function and a naive baseline. The results indicate that the new reward function allows the CGFs to generate more intelligent behavior, which enables better training simulations. |
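The abstract does not give the concrete form of the proposed reward function, so the following is only a minimal sketch of the expected-outcome idea it describes. The function names, the reward values `REWARD_HIT` and `REWARD_MISS`, and the example kill probability are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of the expected-outcome reward idea from the abstract:
# instead of rewarding the agent for the chance-dependent result of a missile
# shot (hit or miss), reward it with the expectation over both outcomes.
# All names and numeric values below are illustrative assumptions.

REWARD_HIT = 1.0    # assumed reward when the missile hits
REWARD_MISS = -0.1  # assumed penalty when the missile misses

def outcome_based_reward(hit: bool) -> float:
    """Old-style reward: depends on the stochastic outcome of the shot."""
    return REWARD_HIT if hit else REWARD_MISS

def expected_outcome_reward(p_kill: float) -> float:
    """Expected-outcome reward: depends only on the shot's probability-of-kill,
    so identical decisions receive identical rewards regardless of chance."""
    return p_kill * REWARD_HIT + (1.0 - p_kill) * REWARD_MISS

# Example: a shot with a 0.7 probability-of-kill always earns the same reward,
# whether or not the missile happens to hit in this particular episode.
print(expected_outcome_reward(0.7))  # 0.67
```

Under these assumptions, an agent that takes the same shot in the same situation is rewarded identically across episodes, which removes the "false rewards and punishments" that arise when chance decides the missile's outcome.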
Year | DOI | Venue |
---|---|---|
2015 | 10.1109/SMC.2015.248 | 2015 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC 2015): BIG DATA ANALYTICS FOR HUMAN-CENTRIC SYSTEMS |
Keywords | Field | DocType
---|---|---|
reinforcement learning, rewards, air combat, training simulations, computer generated forces | Radar, Computer science, Missile, Computer generated forces, Artificial intelligence, Adaptive behavior, Machine learning, Air combat, Reinforcement learning | Conference
ISSN | Citations | PageRank
---|---|---|
1062-922X | 1 | 0.37
References | Authors
---|---|
10 | 5
Name | Order | Citations | PageRank |
---|---|---|---|
Armon Toubman | 1 | 9 | 2.71 |
Jan Joris Roessingh | 2 | 14 | 4.28 |
Pieter Spronck | 3 | 475 | 51.04 |
Aske Plaat | 4 | 524 | 72.18 |
H. Jaap van den Herik | 5 | 861 | 137.51 |