Title
Work-first and help-first scheduling policies for async-finish task parallelism
Abstract
Multiple programming models are emerging to address an increased need for dynamic task parallelism in applications for multicore processors and shared-address-space parallel computing. Examples include OpenMP 3.0, Java Concurrency Utilities, Microsoft Task Parallel Library, Intel Thread Building Blocks, Cilk, X10, Chapel, and Fortress. Scheduling algorithms based on work stealing, as embodied in Cilk's implementation of dynamic spawn-sync parallelism, are gaining in popularity but also have inherent limitations. In this paper, we address the problem of efficient and scalable implementation of X10's async-finish task parallelism, which is more general than Cilk's spawn-sync parallelism. We introduce a new work-stealing scheduler with compiler support for async-finish task parallelism that can accommodate both work-first and help-first scheduling policies. Performance results on two different multicore SMP platforms show significant improvements due to our new work-stealing algorithm compared to the existing work-sharing scheduler for X10, and also provide insights on scenarios in which the help-first policy yields better results than the work-first policy and vice versa.
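For readers unfamiliar with the async-finish model discussed in the abstract, the sketch below is a minimal illustration only, not the paper's work-stealing runtime or the X10 implementation. It models X10-style async-finish parallelism on top of a plain Java thread pool, using a hypothetical Finish helper class built on java.util.concurrent.Phaser; the names Finish, async, and await are assumptions made here for illustration. Handing each spawned child to the pool while the spawning thread continues with the parent corresponds roughly to the help-first policy; under a work-first policy the spawning worker would instead execute the child immediately and leave the parent's continuation available for stealing.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Phaser;

    // Illustrative sketch only: models "async S" (spawn S as a task) and
    // "finish { ... }" (wait for all asyncs spawned in the block) on a thread pool.
    public class AsyncFinishSketch {
        private static final ExecutorService POOL =
                Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

        // A finish scope: tracks outstanding asyncs with a Phaser.
        static final class Finish {
            private final Phaser phaser = new Phaser(1); // register the spawning activity

            // Help-first flavour: hand the child to the pool and keep
            // executing the parent's continuation on the current thread.
            void async(Runnable body) {
                phaser.register();
                POOL.execute(() -> {
                    try {
                        body.run();
                    } finally {
                        phaser.arriveAndDeregister();
                    }
                });
            }

            // End of the finish block: block until every registered async has completed.
            void await() {
                phaser.arriveAndAwaitAdvance();
            }
        }

        // Example: parallel array sum under a single (non-nested) finish scope.
        public static void main(String[] args) {
            long[] a = new long[1_000_000];
            for (int i = 0; i < a.length; i++) a[i] = i;

            int chunks = 8;
            long[] partial = new long[chunks];
            int chunkSize = a.length / chunks;

            Finish f = new Finish();
            for (int c = 0; c < chunks; c++) {
                final int id = c;
                f.async(() -> {
                    long s = 0;
                    int lo = id * chunkSize;
                    int hi = (id == chunks - 1) ? a.length : lo + chunkSize;
                    for (int i = lo; i < hi; i++) s += a[i];
                    partial[id] = s;
                });
            }
            f.await(); // finish: all asyncs joined here

            long total = 0;
            for (long s : partial) total += s;
            System.out.println("sum = " + total);
            POOL.shutdown();
        }
    }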
Year
2009
DOI
10.1109/IPDPS.2009.5161079
Venue
IPDPS
Keywords
shared-address-space parallel computing, x10, scheduling, dynamic spawn-sync parallelism, dynamic task parallelism, chapel, parallel programming, help-first policy yield, spawn-sync parallelism, help-first scheduling policy, fortress, task analysis, multicore processor, existing work-sharing scheduler, async-finish task parallelism, compiler support, openmp 3.0, intel thread building blocks, cilk, java concurrency utilities, programming model, new work-stealing algorithm, help-first scheduling policies, work stealing, work-first scheduling policies, different multicore smp platform, microsoft task parallel library, scheduling algorithm, cloning, concurrent computing, multicore processing, dynamic programming, java, synchronization, multicore processors, parallel processing, parallel computer, data mining
Field
Instruction-level parallelism, Parallel Extensions, Computer architecture, Implicit parallelism, Task parallelism, Computer science, Parallel computing, Data parallelism, Scalable parallelism, Work stealing, Cilk, Distributed computing
DocType
Conference
ISSN
1530-2075
ISBN
978-1-4244-3750-4
E-ISBN
978-1-4244-3750-4
Citations
73
PageRank
2.89
References
11
Authors
4
Name              Order  Citations  PageRank
Yi Guo            1      414        44.10
Rajkishore Barik  2      562        43.70
Raghavan Raman    3      235        10.70
Vivek Sarkar      4      4318       409.41