| Abstract |
| --- |
| Advanced Persistent Threats (APTs) are stealthy, sophisticated, long-term attacks that impose significant economic costs and compromise the security of sensitive information. Data and control-flow commands arising from APTs introduce new information flows into the targeted computer system. Dynamic Information Flow Tracking (DIFT) is a promising detection mechanism against APTs that taints suspicious input sources in the system and authenticates the tainted flows at certain processes according to a well-defined security policy. Deploying DIFT to defend against APTs in large-scale cyber systems is restricted by the heavy resource and performance overhead it introduces on the system. The objective of this paper is to model a resource-efficient DIFT that successfully detects APTs. We develop a game-theoretic framework and provide an analytical model of DIFT that enables the study of the trade-off between resource efficiency and detection quality in DIFT. Our proposed infinite-horizon, nonzero-sum, stochastic game captures DIFT performance parameters such as false alarms and false negatives, and considers an attacker model in which the APT can relaunch the attack if a previous attempt fails, thereby continuously threatening the system. We assume that some of the performance parameters of DIFT are unknown. We propose a model-free reinforcement learning algorithm that converges to a Nash equilibrium of the discounted stochastic game between the APT and DIFT. We execute and evaluate the proposed algorithm on a real-world nation-state attack dataset. |
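To make the abstract's setup concrete, below is a minimal toy sketch of a two-player stochastic game between a DIFT defender and an APT attacker, trained with independent epsilon-greedy Q-learning. Everything here is a hypothetical illustration: the states, actions, costs (`C_V`, `C_FA`), and transition probabilities are invented for the example, and independent Q-learning does not carry the Nash-equilibrium convergence guarantee of the paper's actual algorithm.

```python
import random

# Toy stochastic game (hypothetical, not the paper's model):
# states: 0 = benign flow, 1 = tainted flow at a verification point.
# Defender actions: 0 = skip verification, 1 = verify (costly, may false-alarm).
# Attacker actions: 0 = stay idle, 1 = advance the attack.
STATES, D_ACTIONS, A_ACTIONS = 2, 2, 2
GAMMA, ALPHA, EPS = 0.9, 0.1, 0.2  # discount, learning rate, exploration

def step(state, d_act, a_act, rng):
    """Hypothetical rewards with verification cost C_V, false-alarm cost C_FA,
    detection reward R_DETECT, and attack payoff R_ATTACK; returns
    (next_state, defender_reward, attacker_reward)."""
    C_V, C_FA, R_DETECT, R_ATTACK = 0.2, 0.5, 1.0, 1.0
    d_r = a_r = 0.0
    if d_act == 1:
        d_r -= C_V                          # verification always costs resources
        if a_act == 1 and state == 1:       # true detection of a tainted flow
            d_r += R_DETECT
            a_r -= R_ATTACK
        elif a_act == 0:                    # false alarm on benign activity
            d_r -= C_FA
    elif a_act == 1 and state == 1:         # missed attack (false negative)
        d_r -= R_ATTACK
        a_r += R_ATTACK
    next_state = 1 if (a_act == 1 and rng.random() < 0.7) else 0
    return next_state, d_r, a_r

def train(episodes=2000, horizon=20, seed=0):
    """Independent epsilon-greedy Q-learning for both players."""
    rng = random.Random(seed)
    Qd = [[0.0] * D_ACTIONS for _ in range(STATES)]
    Qa = [[0.0] * A_ACTIONS for _ in range(STATES)]
    for _ in range(episodes):
        s = 0
        for _ in range(horizon):
            da = (rng.randrange(D_ACTIONS) if rng.random() < EPS
                  else max(range(D_ACTIONS), key=lambda a: Qd[s][a]))
            aa = (rng.randrange(A_ACTIONS) if rng.random() < EPS
                  else max(range(A_ACTIONS), key=lambda a: Qa[s][a]))
            s2, dr, ar = step(s, da, aa, rng)
            # standard model-free Q-learning updates, one per player
            Qd[s][da] += ALPHA * (dr + GAMMA * max(Qd[s2]) - Qd[s][da])
            Qa[s][aa] += ALPHA * (ar + GAMMA * max(Qa[s2]) - Qa[s][aa])
            s = s2
    return Qd, Qa
```

The sketch only shows the mechanics the abstract names: model-free learning (neither player sees the transition or reward model) over a discounted game whose payoffs encode false alarms and false negatives. The paper's algorithm additionally guarantees convergence to a Nash equilibrium, which plain independent Q-learning does not.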
| Year | DOI | Venue |
| --- | --- | --- |
| 2019 | 10.1007/978-3-030-32430-8_25 | Decision and Game Theory for Security |
| Keywords | Field | DocType |
| --- | --- | --- |
| Security of computer systems, Advanced persistent threats, Dynamic Information Flow Tracking, Stochastic games, Reinforcement learning | Information flow (information theory), Resource efficiency, Computer science, Control flow, Security policy, Nash equilibrium, Information sensitivity, Stochastic game, Reinforcement learning, Distributed computing | Conference |
| Volume | ISSN | Citations |
| --- | --- | --- |
| 11836 | 0302-9743 | 0 |

| PageRank | References | Authors |
| --- | --- | --- |
| 0.34 | 0 | 6 |
| Name | Order | Citations | PageRank |
| --- | --- | --- | --- |
| Dinuka Sahabandu | 1 | 2 | 2.07 |
| Shana Moothedath | 2 | 0 | 1.01 |
| Joey Allen | 3 | 0 | 1.69 |
| Linda Bushnell | 4 | 13 | 2.04 |
| Wenke Lee | 5 | 9351 | 628.83 |
| Radha Poovendran | 6 | 2577 | 168.26 |