Abstract |
---|
Exploiting the full computational power of current hierarchical multiprocessor machines requires a very careful distribution
of threads and data over the underlying non-uniform architecture so as to avoid remote memory access penalties. Directive-based
programming languages such as OpenMP can greatly help to perform such a distribution by providing programmers with an easy
way to structure the parallelism of their application and to transmit this information to the runtime system. Our runtime,
which is based on a multi-level thread scheduler combined with a NUMA-aware memory manager, converts this information into
scheduling hints related to thread-memory affinity issues. These hints enable dynamic load distribution guided by application structure and
hardware topology, thus helping to achieve performance portability. Several experiments show that mixed solutions (migrating
both threads and data) outperform work-stealing-based balancing strategies and next-touch-based data distribution policies, and they suggest directions for further optimization. |
Year | DOI | Venue |
---|---|---|
2010 | 10.1007/s10766-010-0136-3 | International Journal of Parallel Programming |
Keywords | Field | DocType |
---|---|---|
memory management,memory,multi core,programming language,numa | Architecture,Computer science,Scheduling (computing),Parallel computing,Multiprocessing,Thread (computing),Memory management,Software portability,Multi-core processor,Runtime system | Journal |

Volume | Issue | ISSN |
---|---|---|
38 | 5-6 | 1573-7640 |

Citations | PageRank | References |
---|---|---|
36 | 1.54 | 13 |
Authors |
---|
5 |
Name | Order | Citations | PageRank |
---|---|---|---|
François Broquedis | 1 | 157 | 11.99 |
Nathalie Furmento | 2 | 300 | 27.19 |
Brice Goglin | 3 | 226 | 21.78 |
Pierre-André Wacrenier | 4 | 766 | 36.69 |
Raymond Namyst | 5 | 1405 | 83.04 |