Abstract |
---|
We address memory performance issues observed in Blue Gene Linux and discuss the design and implementation of "Big Memory", an alternative, transparent memory space introduced to eliminate these issues. We evaluate the performance of Big Memory using custom memory benchmarks, the NAS Parallel Benchmarks, and the Parallel Ocean Program, at a scale of up to 4,096 nodes. We find that Big Memory successfully resolves the performance issues normally encountered in Blue Gene Linux. For the ocean simulation program, we even find that Linux with Big Memory provides better scalability than does the lightweight compute node kernel designed solely for high-performance applications. Originally intended exclusively for compute node tasks, our new memory subsystem dramatically improves the performance of certain I/O node applications as well. We demonstrate this performance using the central processor of the LOw Frequency ARray radio telescope as an example. |
Year | DOI | Venue |
---|---|---|
2011 | 10.1177/1094342010369116 | IJHPCA |
Keywords | Field | DocType
---|---|---|
big memory,scalability evaluation,performance issue,node kernel,transparent memory space,custom memory benchmarks,new memory subsystem,I/O node application,blue gene linux,node task,memory performance issue,low frequency,genes,simulation,tlb,design,linux,radio telescopes,memory management,performance,radio telescope,evaluation,implementation | Interleaved memory,Extended memory,Uniform memory access,Physical address,Computer science,Parallel computing,Cache-only memory architecture,Memory management,Flat memory model,Distributed shared memory,Operating system | Journal
Volume | Issue | ISSN |
---|---|---|
25 | 2 | 1094-3420 |
Citations | PageRank | References |
---|---|---|
9 | 0.78 | 14 |
Authors |
---|
5 |
Name | Order | Citations | PageRank |
---|---|---|---|
Kazutomo Yoshii | 1 | 249 | 18.53 |
Kamil Iskra | 2 | 642 | 46.46 |
Harish Naik | 3 | 18 | 1.84 |
Pete Beckman | 4 | 822 | 48.04 |
P. Chris Broekema | 5 | 35 | 4.65 |