Abstract |
---|
Urban society relies heavily on critical infrastructure (CI) such as power and water systems. The prosperity and national security of society depend on the ability to understand, measure and analyse the vulnerabilities and interdependencies of this system of infrastructures. Only then can emergency responders (ER) react quickly and effectively to any major disruption the system might face. In this paper, we propose a model to train a reinforcement learning (RL) agent that optimises resource usage following an infrastructure disruption. The novelty of our approach is the use of dynamic programming techniques to build an agent that learns from experience, where the experience is generated by a simulator. The goal of the agent is to maximise an output, which in our case is the number of discharged patients (DP) from hospitals or on-site emergency units. We show that by exposing such an intelligent agent to a long sequence of simulated disaster scenarios, we can capture enough experience to enable the agent to make informed decisions. |
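The abstract describes an RL agent that learns a resource-allocation policy from simulated disaster episodes. As a minimal sketch of that idea, the tabular Q-learning loop below trains against a toy simulator. Everything here is illustrative: the two-utility environment, the action names, and the reward values are invented stand-ins for the paper's i2Sim simulator and its discharged-patients objective.

```python
import random

# Hypothetical actions: route a scarce resource to restore one utility.
ACTIONS = ("restore_power", "restore_water")

def step(state, action):
    """Toy simulator: state is (power_ok, water_ok). The hospital only
    discharges patients (reward 10) once both utilities are restored."""
    power, water = state
    if action == "restore_power":
        power = True
    else:
        water = True
    reward = 10 if (power and water) else 0  # discharged patients this step
    return (power, water), reward

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Epsilon-greedy tabular Q-learning over short simulated episodes."""
    rng = random.Random(seed)
    Q = {}  # Q[(state, action)] -> expected discounted discharged patients
    for _ in range(episodes):
        state = (False, False)  # both utilities down after the disaster
        for _ in range(2):      # two allocation decisions per episode
            if rng.random() < epsilon:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))
            nxt, reward = step(state, action)
            best_next = max(Q.get((nxt, a), 0.0) for a in ACTIONS)
            q = Q.get((state, action), 0.0)
            Q[(state, action)] = q + alpha * (reward + gamma * best_next - q)
            state = nxt
    return Q

def greedy_total(Q):
    """Roll out the learned greedy policy and return total reward."""
    state, total = (False, False), 0
    for _ in range(2):
        action = max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))
        state, reward = step(state, action)
        total += reward
    return total
```

After training, the greedy policy restores both utilities within the two decisions, so `greedy_total` reaches the full per-episode reward; the point of the sketch is only that experience generated by a simulator, fed through a value-update rule, suffices to produce an informed allocation policy.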
Year | DOI | Venue |
---|---|---|
2014 | 10.1504/IJCIS.2014.062968 | International Journal of Critical Infrastructures |
Keywords | Field | DocType
---|---|---|
artificial intelligence, critical infrastructure, disaster response, i2Sim real-time simulator, reinforcement-learning agent, responsive crisis management, resource allocation, agent-based modelling, decision support system | Interdependence, Intelligent agent, Computer security, Emergency management, Decision support system, Critical infrastructure, Multi-agent system, Resource allocation, Engineering, Reinforcement learning | Journal
Volume | Issue | ISSN
---|---|---|
10 | 2 | 1475-3219 |
Citations | PageRank | References
---|---|---|
1 | 0.40 | 0 |
Authors |
---|
3 |
Name | Order | Citations | PageRank |
---|---|---|---|
Mohammed Talat Khouj | 1 | 1 | 0.40 |
Sarbjit Sarkaria | 2 | 1 | 0.40 |
José R. Martí | 3 | 1 | 0.40 |