Title
Barrier-Aware Warp Scheduling for Throughput Processors
Abstract
Parallel GPGPU applications rely on barrier synchronization to align thread-block activity. Little prior work has studied and characterized barrier synchronization within a thread block and its impact on performance. In this paper, we find that barriers cause substantial stall cycles in barrier-intensive GPGPU applications, even though GPGPUs employ lightweight hardware-supported barriers. To investigate the reasons, we define the execution between two adjacent barriers of a thread block as a warp-phase. We find that execution progress within a warp-phase varies dramatically across warps, a phenomenon we call warp-phase-divergence. While warp-phase-divergence may result from execution-time disparity among warps due to differences in application code or input, and/or shared-resource contention, we also pinpoint that it may result from warp scheduling itself. To mitigate barrier-induced stall cycles, we propose barrier-aware warp scheduling (BAWS). It combines two techniques to improve the performance of barrier-intensive GPGPU applications. The first technique, most-waiting-first (MWF), assigns a higher scheduling priority to the warps of a thread block that has a larger number of warps waiting at a barrier. The second technique, critical-fetch-first (CFF), fetches instructions from the warp to be issued by MWF in the next cycle. To evaluate the efficiency of BAWS, we consider 13 barrier-intensive GPGPU applications, and we report that BAWS speeds up performance by 17% and 9% on average (and up to 35% and 30%) over loosely-round-robin (LRR) and greedy-then-oldest (GTO) warp scheduling, respectively. We compare BAWS against the recent concurrent work SAWS, finding that BAWS outperforms SAWS by 7% on average and up to 27%. For non-barrier-intensive workloads, we demonstrate that BAWS is performance-neutral compared to GTO and SAWS, while improving performance by 5.7% on average (and up to 22%) compared to LRR.
BAWS' hardware cost is limited to 6 bytes per streaming multiprocessor (SM).
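The MWF rule described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes a simplified model in which each warp exposes its thread-block id and whether it is stalled at the block's barrier, and the names `Warp` and `mwf_select` are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Warp:
    id: int
    block_id: int
    at_barrier: bool  # True if this warp is stalled at its block's barrier

def mwf_select(warps):
    """Most-waiting-first (MWF) selection sketch.

    Among warps still running (not yet at a barrier), prefer those whose
    thread block has the most warps already waiting at its barrier: issuing
    them sooner releases the whole block earlier. Ties are broken by warp id
    as a stand-in for an oldest-first policy.
    """
    # Count barrier-waiting warps per thread block.
    waiting_per_block = {}
    for w in warps:
        if w.at_barrier:
            waiting_per_block[w.block_id] = waiting_per_block.get(w.block_id, 0) + 1

    runnable = [w for w in warps if not w.at_barrier]
    if not runnable:
        return None  # every warp is stalled at a barrier
    return min(runnable,
               key=lambda w: (-waiting_per_block.get(w.block_id, 0), w.id))
```

For example, if block 0 has two of its three warps waiting at a barrier while block 1 has only one, MWF issues the remaining runnable warp of block 0 first, since completing it releases the most stalled warps.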
Year: 2016
DOI: 10.1145/2925426.2926267
Venue: ICS
Field: Byte, Synchronization, Scheduling (computing), Computer science, Parallel computing, Real-time computing, Multiprocessing, Thread (computing), Power gating, General-purpose computing on graphics processing units, Throughput
DocType: Conference
Citations: 12
PageRank: 0.56
References: 20
Authors: 8
Name                Order  Citations  PageRank
Yu-xi Liu           1      21         4.70
Zhibin Yu           2      141        17.67
Lieven Eeckhout     3      2863       195.11
Vijay Janapa Reddi  4      2931       140.26
Yingwei Luo         5      315        41.30
Xiao-lin Wang       6      76         4.32
Zhenlin Wang        7      91         6.68
Z. Chen             8      3443       271.62