Abstract
---
Existing approaches to higher-order vectorisation, also known as flattening nested data parallelism, do not preserve the asymptotic work complexity of the source program. Straightforward examples, such as sparse matrix-vector multiplication, can suffer a severe blow-up in both time and space, which limits the practicality of this method. We discuss why this problem arises, identify the mis-handling of index space transforms as the root cause, and present a solution using a refined representation of nested arrays. We have implemented this solution in Data Parallel Haskell (DPH) and present benchmarks showing that realistic programs, which used to suffer the blow-up, now have the correct asymptotic work complexity. In some cases, the asymptotic complexity of the vectorised program is even better than that of the original.
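The abstract's running example, sparse matrix-vector multiplication, can be sketched with a segmented (flat) array representation to show why work complexity matters. This is an illustrative sketch in Python, not the paper's DPH implementation; all names (`smvm_segmented`, `seglens`, etc.) are our own, and the paper's refined nested-array representation is more involved.

```python
def smvm_segmented(seglens, cols, vals, x):
    """Sparse matrix-vector multiply over flat (segmented) arrays.

    seglens[i] -- number of non-zeros in row i
    cols, vals -- column indices and values of all non-zeros, row-major
    x          -- the dense input vector

    Work is O(nnz), proportional to the non-zeros actually touched;
    preserving this bound is the point of the paper. A naive flattening
    that replicates x once per row would do O(rows * len(x)) work
    instead: the blow-up the abstract describes.
    """
    result = []
    offset = 0
    for n in seglens:
        seg = range(offset, offset + n)
        result.append(sum(vals[j] * x[cols[j]] for j in seg))
        offset += n
    return result

# A 3x4 matrix with 3 non-zeros:
#   [[5, 0, 0, 2],
#    [0, 0, 0, 0],
#    [0, 3, 0, 0]]
seglens = [2, 0, 1]
cols    = [0, 3, 1]
vals    = [5.0, 2.0, 3.0]
x       = [1.0, 1.0, 1.0, 2.0]
print(smvm_segmented(seglens, cols, vals, x))  # [9.0, 0.0, 3.0]
```

Note that the empty middle row costs nothing here; under a replicating flattening it would still pay for a full copy of `x`.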
Year | DOI | Venue |
---|---|---|
2012 | 10.1145/2364527.2364564 | ICFP |
Keywords | Field | DocType
---|---|---
source program,severe blow-up,realistic program,asymptotic complexity,present benchmarks,efficient higher-order vectorisation,nested array,asymptotic work complexity,nested data parallelism,index space,correct asymptotic work complexity,higher order,data parallelism,sparse matrix,indexation,languages | Programming language,Flattening,Computer science,Theoretical computer science,Multiplication,Haskell,Root cause,Asymptotic complexity,Parallel computing,Spacetime,Algorithm,Data parallelism,Nested arrays | Conference
Volume | Issue | ISSN
---|---|---
47 | 9 | 0362-1340
Citations | PageRank | References
---|---|---
6 | 0.51 | 11
Authors
---
5
Name | Order | Citations | PageRank |
---|---|---|---
Ben Lippmeier | 1 | 135 | 7.99 |
Manuel M.T. Chakravarty | 2 | 170 | 7.02 |
Gabriele Keller | 3 | 657 | 36.02 |
Roman Leshchinskiy | 4 | 326 | 14.89 |
Simon L. Peyton Jones | 5 | 5036 | 381.19 |