Abstract |
---|
Scientific discovery increasingly depends on complex workflows consisting of multiple phases and sometimes millions of parallelizable tasks or pipelines. These workflows access storage resources for a variety of purposes, including preprocessing, simulation output, and postprocessing steps. Unfortunately, most workflow models focus on the scheduling and allocation of computational resources for tasks, while the impact on storage systems remains a secondary objective and an open research question. I/O performance is not usually accounted for in workflow telemetry reported to users. In this paper, we present an approach to improve the I/O efficiency of the individual tasks of workflows by combining workflow description frameworks with system I/O telemetry data. A conceptual architecture and a prototype implementation for HPC data center deployments are introduced. We also identify and discuss challenges that will need to be addressed by workflow management and monitoring systems for HPC in the future. We demonstrate how real-world applications and workflows could benefit from the approach, and we show how the approach helps communicate performance-tuning guidance to users. |
Year | DOI | Venue |
---|---|---|
2018 | 10.1109/PDSW-DISCS.2018.00012 | PDSW-DISCS@SC |
Keywords | DocType | ISBN
---|---|---|
Task analysis, Data models, Pipelines, Tools, Engines, Monitoring, Telemetry | Conference | 978-1-7281-0192-7
Citations | PageRank | References
---|---|---|
3 | 0.41 | 0
Authors |
---|
6 |
Name | Order | Citations | PageRank |
---|---|---|---|
Jakob Lüttgau | 1 | 3 | 0.41 |
Shane Snyder | 2 | 64 | 8.38 |
Philip H. Carns | 3 | 964 | 62.51 |
Justin M. Wozniak | 4 | 464 | 35.32 |
Julian M. Kunkel | 5 | 3 | 0.75 |
Thomas Ludwig | 6 | 282 | 34.89 |