Abstract |
---|
High-level data structures are an important foundation for most applications. With the rise of multicore processors, there is a trend of supporting data-parallel collection operations in general-purpose programming languages. However, these operations often incur high abstraction and scheduling penalties. We present a generic data-parallel collections design based on work-stealing for shared-memory architectures that overcomes abstraction penalties through call-site specialization of data-parallel operation instances. Moreover, we introduce work-stealing iterators that allow more fine-grained and efficient work-stealing. By eliminating abstraction penalties and making work-stealing data-structure-aware, we achieve performance several dozen times better than existing JVM-based approaches. |

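The abstract's central idea of a work-stealing iterator can be illustrated with a minimal sketch. The version below iterates over an index range and packs both bounds into a single `AtomicLong`, so the owner's batch-wise advance and a thief's steal are each a single CAS; all names (`StealingIterator`, `advance`, `steal`) are hypothetical, and the paper's actual iterators are data-structure-aware, operating over concrete collection internals rather than a bare index range.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of a lock-free work-stealing iterator over an index
// range [progress, until). Both bounds live in one AtomicLong so that the
// owner's advance and a thief's steal are each a single compare-and-set.
// Class and method names are illustrative, not the paper's actual API.
final class StealingIterator {
    private final AtomicLong state; // progress in high 32 bits, until in low 32

    StealingIterator(int from, int until) { state = new AtomicLong(pack(from, until)); }

    private static long pack(int progress, int until) {
        return ((long) progress << 32) | (until & 0xFFFFFFFFL);
    }

    /** Owner claims the next batch of at most `step` indices; null when drained. */
    int[] advance(int step) {
        while (true) {
            long s = state.get();
            int p = (int) (s >>> 32), u = (int) s;
            if (p >= u) return null;                     // exhausted (or fully stolen)
            int next = Math.min(p + step, u);
            if (state.compareAndSet(s, pack(next, u))) return new int[] { p, next };
        }
    }

    /** Thief splits off the upper half of the unprocessed range in one CAS. */
    StealingIterator steal() {
        while (true) {
            long s = state.get();
            int p = (int) (s >>> 32), u = (int) s;
            if (u - p < 2) return null;                  // too little left to steal
            int mid = p + (u - p) / 2;
            if (state.compareAndSet(s, pack(p, mid)))    // victim keeps [p, mid)
                return new StealingIterator(mid, u);     // thief takes [mid, u)
        }
    }

    public static void main(String[] args) {
        StealingIterator owner = new StealingIterator(0, 100);
        StealingIterator thief = owner.steal();          // thief takes [50, 100)
        long sum = 0;
        for (StealingIterator it : new StealingIterator[] { owner, thief }) {
            int[] batch;
            while ((batch = it.advance(8)) != null)
                for (int i = batch[0]; i < batch[1]; i++) sum += i;
        }
        System.out.println(sum);                         // 0 + 1 + ... + 99 = 4950
    }
}
```

Because a steal replaces the whole packed state in one CAS, an owner that concurrently advances either sees the old bounds (and its CAS fails, forcing a reread) or the shrunken range, so no index is processed twice, which is the fine-grained stealing property the abstract claims.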
Year | DOI | Venue |
---|---|---|
2015 | 10.1109/PDP.2015.65 | PDP |

Keywords | Field | DocType
---|---|---|
data parallelism | Programming language, Abstraction, General purpose, Computer science, Scheduling (computing), Non-blocking algorithm, Parallel computing, Call site, Data parallelism, Work stealing, Distributed computing | Conference

ISSN | Citations | PageRank
---|---|---|
1066-6192 | 2 | 0.37

References | Authors
---|---|
4 | 3

Name | Order | Citations | PageRank |
---|---|---|---|
Aleksandar Prokopec | 1 | 163 | 13.56 |
Dmitry Petrashko | 2 | 9 | 1.56 |
Martin Odersky | 3 | 2261 | 170.39 |