Title
Can traditional programming bridge the ninja performance gap for parallel computing applications?
Abstract
Current processor trends of integrating more cores with wider SIMD units, along with a deeper and more complex memory hierarchy, have made it increasingly challenging to extract performance from applications. Some believe that traditional programming approaches do not apply to these modern processors and that radical new languages must therefore be invented. In this paper, we question this thinking and offer evidence of the effectiveness of traditional programming methods, in terms of performance versus programming effort, on common multi-core processors and upcoming many-core architectures, which deliver significant speedups and close-to-optimal performance for commonly used parallel computing workloads. We first quantify the extent of the "Ninja gap": the performance gap between naively written C/C++ code that is parallelism-unaware (often serial) and best-optimized code on modern multi-/many-core processors. Using a set of representative throughput computing benchmarks, we show that there is an average Ninja gap of 24X (up to 53X) for a recent 6-core Intel® Core™ i7 X980 Westmere CPU, and that this gap, if left unaddressed, will inevitably increase. We then show how a set of well-known algorithmic changes, coupled with advancements in modern compiler technology, can bring the Ninja gap down to an average of just 1.3X. These changes typically require low programming effort, compared with the very high effort needed to produce Ninja code. We also discuss hardware support for programmability that can reduce the impact of these changes and further increase programmer productivity. We show equally encouraging results for the upcoming Intel® Many Integrated Core (Intel® MIC) architecture, which has more cores and wider SIMD units. We thus demonstrate that the otherwise uncontrolled growth of the Ninja gap can be contained, offering more stable and predictable performance growth across future architectures and strong evidence that radical language changes are not required.
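The abstract's claim about closing the Ninja gap rests on applying well-known code transformations (the paper discusses changes such as blocking for caches and converting data layouts from array-of-structures to structure-of-arrays) and then letting a modern compiler parallelize and vectorize the result. As a minimal illustrative sketch only, not code taken from the paper, the following C program contrasts a naively written serial kernel with the same kernel annotated with an OpenMP pragma so the compiler can distribute it across cores and SIMD lanes; the saxpy-style kernel and the function names are hypothetical stand-ins for the paper's throughput benchmarks.

/* Illustrative sketch only (not from the paper): a naive serial kernel vs. the
 * same kernel annotated so a modern compiler can parallelize and vectorize it.
 * Build with OpenMP enabled, e.g.:  cc -O3 -fopenmp ninja_sketch.c
 */
#include <stdio.h>
#include <stdlib.h>

/* Naive, parallelism-unaware version: a single scalar loop. */
static void saxpy_naive(int n, float a, const float *x, float *y)
{
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

/* Low-effort "traditional programming" version: identical code plus an
 * OpenMP pragma asking the compiler to split the loop across cores and
 * vectorize each chunk with SIMD instructions. */
static void saxpy_annotated(int n, float a, const float *restrict x,
                            float *restrict y)
{
#pragma omp parallel for simd
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    const int n = 1 << 20;
    float *x = malloc(n * sizeof *x);
    float *y = malloc(n * sizeof *y);
    if (!x || !y)
        return 1;
    for (int i = 0; i < n; i++) {
        x[i] = 1.0f;
        y[i] = 2.0f;
    }

    saxpy_naive(n, 0.5f, x, y);      /* y[i] becomes 2.5 */
    saxpy_annotated(n, 0.5f, x, y);  /* y[i] becomes 3.0 */

    printf("y[0] = %.2f\n", y[0]);
    free(x);
    free(y);
    return 0;
}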
Year
2015
DOI
10.1145/2742910
Venue
Communications of the ACM
Keywords
average ninja gap, processors, low programming effort, predictable performance growth, ninja performance gap, systems, parallel computing application, close-to-optimal performance, ninja code, performance gap, best-optimized code, ninja gap, modern compiler technology, traditional programming bridge, high effort, program optimization, multi core processor, topology, intel many integrated core architecture, optimization, parallel computer, parallel processing, benchmark testing, parallel programming, diameter, high performance computing, multicore processors, bandwidth, programming
DocType
Conference
Volume
58
Issue
5
ISSN
0001-0782
Citations
18
PageRank
0.91
References
25
Authors
8
Name                  Order  Citations  PageRank
Nadathur Satish           1       2020     99.88
Changkyu Kim              2       1848     98.89
Jatin Chhugani            3       1452     79.32
Hideki Saito              4        177     14.88
Rakesh Krishnaiyer        5        174     19.65
Mikhail Smelyanskiy       6       1160     65.96
Milind Girkar             7        451     36.79
Pradeep K. Dubey          8      34322     92.69