Abstract |
---|
Supercomputing is evolving towards hybrid and accelerator-based architectures with millions of cores. The HACC (Hardware/Hybrid Accelerated Cosmology Code) framework exploits this diverse landscape at the largest scales of problem size, obtaining high scalability and sustained performance. Developed to satisfy the science requirements of cosmological surveys, HACC melds particle and grid methods using a novel algorithmic structure that flexibly maps across architectures, including CPU/GPU, multi/many-core, and Blue Gene systems. We demonstrate the success of HACC on two very different machines, the CPU/GPU system Titan and the BG/Q systems Sequoia and Mira, attaining unprecedented levels of scalable performance. We demonstrate strong and weak scaling on Titan, obtaining up to 99.2% parallel efficiency, evolving 1.1 trillion particles. On Sequoia, we reach 13.94 PFlops (69.2% of peak) and 90% parallel efficiency on 1,572,864 cores, with 3.6 trillion particles, the largest cosmological benchmark yet performed. HACC design concepts are applicable to several other supercomputer applications. |
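The abstract's headline metric, weak-scaling parallel efficiency, is the ratio of the baseline runtime to the runtime at scale when the problem size grows proportionally with the core count. A minimal sketch of that calculation, using hypothetical timings (not HACC's actual measurements) chosen to reproduce the quoted 99.2% figure:

```python
def weak_scaling_efficiency(t_base: float, t_scaled: float) -> float:
    """Weak-scaling parallel efficiency.

    t_base:   wall-clock time on the base partition.
    t_scaled: wall-clock time at full scale, with the problem size
              increased in proportion to the core count.
    Ideal weak scaling keeps the runtime constant, so efficiency = t_base / t_scaled.
    """
    return t_base / t_scaled

# Hypothetical illustration: 100.0 s at base scale, 100.8 s at full scale.
eff = weak_scaling_efficiency(100.0, 100.8)
print(f"{eff:.1%}")  # → 99.2%
```

Under ideal weak scaling the runtime stays flat as cores and particles grow together, so values near 100% (as reported for Titan) indicate very little parallel overhead.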
Year | DOI | Venue
---|---|---
2017 | 10.1145/3015569 | Communications of the ACM
Keywords | Field | DocType
---|---|---
trillion particle, extreme scaling, scalable performance, parallel efficiency, HACC design concept, largest scale, cosmological survey, largest cosmological benchmark, Blue Gene system, GPU system, diverse architecture, sustained performance | Central processing unit, Supercomputer, Computer science, Parallel computing, Blue Gene, Scaling, Grid, Distributed computing, Scalability | Journal
Volume | Issue | ISSN
---|---|---
60 | 1 | 0001-0782
Citations | PageRank | References
---|---|---
25 | 1.00 | 2
Authors |
---|
6 |
Name | Order | Citations | PageRank
---|---|---|---
Salman Habib | 1 | 98 | 15.24 |
Vitali Morozov | 2 | 142 | 9.11 |
Nicholas Frontiere | 3 | 61 | 4.13 |
Hal Finkel | 4 | 114 | 18.43 |
Adrian Pope | 5 | 67 | 4.45 |
Katrin Heitmann | 6 | 144 | 14.49 |