Title |
---|
Network defense decision-making based on a stochastic game system and a deep recurrent Q-network |
Abstract |
---|
Defense decision-making in cybersecurity has increasingly relied upon stochastic game processes that combine game theory with a Markov decision process (MDP). However, the MDP presumes that both attackers and defenders are perfectly rational and have complete information, which greatly limits the applicability of the MDP and its value in guiding the defense decision-making process. The present study addresses this issue by applying a partially observable MDP to analyze attack-defense behaviors, together with a deep Q-network (DQN) algorithm based on a recurrent neural network, to solve game equilibria dynamically and intelligently under conditions of partial rationality and incomplete information. The proposed DQN method enables network defense strategies to leverage online learning to gradually approach an optimal defense strategy. The rationality and convergence of the proposed approach are demonstrated by conducting simulations and comparative analyses of both the attacking and defending parties engaged in distributed reflection denial of service attacks. |
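The Q-learning core that the paper's deep recurrent Q-network extends can be sketched on a toy attack-defense game. All states, actions, transition probabilities, and rewards below are hypothetical illustrations, not values from the paper; the paper's contribution is to replace the tabular Q-function with a recurrent deep network so the defender can act on observation histories under partial observability.

```python
import random

# Toy stochastic attack-defense game (hypothetical model, for illustration only).
# States: 0 = "normal operation", 1 = "under DRDoS attack".
# Defender actions: 0 = monitor (cheap), 1 = mitigate (costly but effective).
STATES, ACTIONS = 2, 2
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def step(state, action, rng):
    """Illustrative transition and reward model (not from the paper)."""
    if state == 0:  # normal operation
        reward = -1 if action == 1 else 0            # mitigation has a cost
        next_state = 1 if rng.random() < 0.3 else 0  # an attack may start
    elif action == 1:  # under attack, mitigate
        reward = -1
        next_state = 0 if rng.random() < 0.8 else 1  # likely recovery
    else:              # under attack, only monitor: damage accrues
        reward = -5
        next_state = 1 if rng.random() < 0.9 else 0  # attack likely persists
    return next_state, reward

def train(steps=3000, seed=0):
    rng = random.Random(seed)
    q = [[0.0] * ACTIONS for _ in range(STATES)]
    state = 0
    for _ in range(steps):
        # epsilon-greedy action selection
        if rng.random() < EPSILON:
            action = rng.randrange(ACTIONS)
        else:
            action = max(range(ACTIONS), key=lambda a: q[state][a])
        next_state, reward = step(state, action, rng)
        # standard Q-learning temporal-difference update
        q[state][action] += ALPHA * (
            reward + GAMMA * max(q[next_state]) - q[state][action]
        )
        state = next_state
    return q

q = train()
# Greedy policy learned from the Q-table: monitor when normal, mitigate under attack.
policy = [max(range(ACTIONS), key=lambda a: q[s][a]) for s in range(STATES)]
```

In the paper's setting the true state (attacker intent) is not directly observable, so the Q-table above would be replaced by a recurrent network whose hidden state summarizes the defender's observation history.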
Year | DOI | Venue
---|---|---
2021 | 10.1016/j.cose.2021.102480 | COMPUTERS & SECURITY

Keywords | DocType | Volume
---|---|---
Defense decision-making, Stochastic game, Partially observable Markov decision process, Deep recurrent Q-network, Distributed reflection denial of service attacks | Journal | 111

ISSN | Citations | PageRank
---|---|---
0167-4048 | 0 | 0.34

References | Authors
---|---
0 | 4
Name | Order | Citations | PageRank |
---|---|---|---|
Xiao-hu Liu | 1 | 0 | 2.03 |
Heng-wei Zhang | 2 | 4 | 7.23 |
Shuqin Dong | 3 | 0 | 0.34 |
Yuchen Zhang | 4 | 8 | 6.51 |