Abstract |
---|
Understanding the extent to which computational results can change across platforms, compilers, and compiler flags can go a long way toward supporting reproducible experiments. In this work, we offer the first automated testing aid, FLiT (Floating-point Litmus Tester), which can show how much these results can vary for any user-given collection of computational kernels. Our approach is to take a collection of these kernels, disperse them across a collection of compute nodes (each with a different architecture), have them compiled and run, and bring the results to a central SQL database for deeper analysis. Properly conducting these activities requires a careful selection (or design) of the kernels, input generation methods for them, and the ability to interpret the results in meaningful ways. The results in this paper are meant to inform two different communities: (a) those seeking higher performance by considering “IEEE unsafe” optimizations, who then want to understand how much result variability to expect, and (b) those interested in standardizing compiler flags and their meanings, so that code can be safely ported across generations of compilers and architectures. By releasing FLiT, we have also opened up the possibility of all HPC developers using it as a common resource and contributing back interesting test kernels and best practices, thus extending the floating-point result-consistency workload we contribute. To our knowledge, this is the first such workload and result-consistency tester addressing floating-point reproducibility. |
Year | DOI | Venue |
---|---|---|
2017 | 10.1109/IISWC.2017.8167780 | 2017 IEEE International Symposium on Workload Characterization (IISWC) |
Keywords | Field | DocType
---|---|---|
kernel selection,test kernels,HPC developers,IEEE unsafe optimizations,cross-platform floating-point result-consistency tester,floating-point reproducibility,floating-point result-consistency workload,input generation methods,central SQL database,compute nodes,computational kernels,Floating-point Litmus Tester,FLiT,automated testing aid,reproducible experiments,compiler flags | Kernel (linear algebra),Architecture,Best practice,Software engineering,Floating point,Computer science,Workload,Parallel computing,Compiler,Sql database,Cross-platform | Conference
ISBN | Citations | PageRank
---|---|---|
978-1-5386-1234-7 | 2 | 0.38
References | Authors
---|---|
7 | 5
Name | Order | Citations | PageRank |
---|---|---|---|
Geoffrey Sawaya | 1 | 3 | 0.76 |
Michael Bentley | 2 | 3 | 1.76 |
Ian Briggs | 3 | 26 | 4.56 |
Ganesh Gopalakrishnan | 4 | 1619 | 130.11 |
Dong H. Ahn | 5 | 325 | 22.61 |