Title
Faster Jobs in Distributed Data Processing Using Multi-Task Learning
Abstract
Slow-running or straggler tasks in distributed processing frameworks [1, 2] can be 6 to 8 times slower than the median task in a job on a production cluster [3], despite existing mitigation techniques. This leads to extended job completion times, inefficient use of resources, and increased costs. Recently, proactive straggler-avoidance techniques [4] have explored the use of predictive models to improve task scheduling. However, to capture node and workload variability, separate models are built for every node and workload, requiring the time-consuming collection of training data and limiting applicability to new nodes and workloads. In this work, we observe that predictors for similar nodes or workloads are likely to be similar and can share information, suggesting a multi-task learning (MTL) based approach. We generalize the MTL formulation of [5] to capture commonalities in arbitrary groups. Using our formulation to predict stragglers allows us to reduce job completion times by up to 59% over Wrangler [4]. This large reduction arises from a 7-point increase in prediction accuracy. Further, we can achieve equal or better accuracy than [4] using a sixth of the training data, bringing the training time down from 4 hours to about 40 minutes. In addition, our formulation reduces the number of parameters by grouping them into node- and workload-dependent factors. This helps us generalize to tasks with insufficient data and achieve significant gains over a naive MTL formulation [5].
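To make the abstract's factored formulation concrete, here is a minimal sketch in Python. It assumes one plausible reading of the abstract: the weight vector for a task on node n under workload w decomposes as w0 + U[n] + V[w], in the spirit of the MTL formulation of [5], so related nodes and workloads share information through common factors. The squared loss, the gradient-descent optimizer, and all names below are illustrative assumptions, not the paper's exact model.

# Hypothetical sketch of a factored multi-task linear predictor.
# Assumed decomposition: weights for (node n, workload w) = w0 + U[n] + V[w].
import numpy as np

def fit_factored_mtl(X, y, node_ids, wl_ids, n_nodes, n_wls,
                     lam=0.1, lr=0.1, epochs=500):
    m, d = X.shape
    w0 = np.zeros(d)              # component shared by all tasks
    U = np.zeros((n_nodes, d))    # node-dependent factors
    V = np.zeros((n_wls, d))      # workload-dependent factors
    for _ in range(epochs):
        W = w0 + U[node_ids] + V[wl_ids]          # per-example weight vectors
        resid = np.einsum('ij,ij->i', X, W) - y   # squared-loss residuals
        G = (resid[:, None] * X) / m              # per-example gradient terms
        w0 -= lr * (G.sum(axis=0) + lam * w0)
        gU = np.zeros_like(U)
        np.add.at(gU, node_ids, G)                # accumulate gradients by node
        gV = np.zeros_like(V)
        np.add.at(gV, wl_ids, G)                  # accumulate gradients by workload
        U -= lr * (gU + lam * U)
        V -= lr * (gV + lam * V)
    return w0, U, V

# Toy usage with synthetic data: score tasks described by 5 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
node_ids = rng.integers(0, 4, size=200)   # 4 nodes
wl_ids = rng.integers(0, 3, size=200)     # 3 workloads
y = rng.normal(size=200)                  # placeholder straggler labels
w0, U, V = fit_factored_mtl(X, y, node_ids, wl_ids, 4, 3)
scores = np.einsum('ij,ij->i', X, w0 + U[node_ids] + V[wl_ids])

Grouping the parameters this way is what would let a new (node, workload) pair with little training data fall back on w0 plus whichever node or workload factors are already trained, matching the abstract's claim of generalizing to tasks with insufficient data.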
Year
2015
Venue
SDM
Field
Training set, Data processing, Multi-task learning, Scheduling (computing), Workload, Computer science, Artificial intelligence, Limiting, Machine learning, Distributed computing
DocType
Conference
Citations
0
PageRank
0.34
References
23
Authors
4
Name                   Order   Citations   PageRank
Neeraja J. Yadwadkar   1       70          4.43
Bharath Hariharan      2       1052        65.90
Joseph E. Gonzalez     3       2219        102.68
Randy H. Katz          4       16819       3018.89