Abstract |
---|
In an Apache Spark cloud computing environment, the computation capability of each node varies; together with differences in data size and uncertainties during application execution, this causes the task execution times of a job to diverge. To improve the accuracy of workload execution time prediction and to reasonably guide users in requesting Spark cluster resources, this paper studies the execution flow of Spark jobs, collects workload time-consumption metrics, and proposes a time-metric fusion calculation scheme. It then investigates multiple linear regression and support vector machine models to explore the relationship between workload execution time and performance indicators such as the number of CPU cores, input data volume, and memory size. Building on these two models, the paper proposes a Standard Regression Coefficient-based Weighted Support Vector Regression time prediction model (SRC-WSVR). Finally, comparing the proposed model against a conventional regression prediction model and a standard Support Vector Machine model shows that SRC-WSVR achieves higher prediction accuracy and can provide a valid data reference for predicting Spark resource consumption. |
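The core idea the abstract describes, weighting SVR input features by their standardized regression coefficients before training, can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: here each feature's standardized coefficient is approximated by its Pearson correlation with the target (which equals the standardized coefficient in a single-variable regression), and the absolute values are used as feature weights. All function names and the sample data are hypothetical.

```python
import math


def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences.

    Assumes neither sequence is constant (nonzero variance)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    return cov / (sx * sy)


def src_weights(samples, targets):
    """One weight per feature column: |standardized coefficient|,
    approximated here by the per-feature correlation with the target."""
    d = len(samples[0])
    columns = [[row[j] for row in samples] for j in range(d)]
    return [abs(pearson(col, targets)) for col in columns]


def apply_weights(samples, weights):
    """Scale each feature column by its SRC weight; the weighted
    samples would then be fed to an ordinary SVR trainer."""
    return [[x * w for x, w in zip(row, weights)] for row in samples]


# Hypothetical workload records: [cpu_cores, unrelated_metric] -> exec time.
samples = [[1.0, 5.0], [2.0, 1.0], [3.0, 4.0], [4.0, 2.0]]
targets = [2.0, 4.0, 6.0, 8.0]  # perfectly driven by feature 0

weights = src_weights(samples, targets)
weighted = apply_weights(samples, weights)
```

With this weighting, features that explain more of the execution-time variance (here, feature 0) dominate the kernel distances inside the downstream SVR, which is the intuition behind giving SRC-WSVR an edge over an unweighted SVR.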
Year | DOI | Venue |
---|---|---
2018 | 10.3233/JHS-170580 | JOURNAL OF HIGH SPEED NETWORKS |
Keywords | Field | DocType
---|---|---
Weighted Support Vector Regression Machine, Standard Regression Coefficient, execution time prediction, performance benchmark, Spark | Computer architecture, Spark (mathematics), Computer science, Real-time computing, Cloud computing | Journal
Volume | Issue | ISSN
---|---|---
24 | 1 | 0926-6801
Citations | PageRank | References
---|---|---
1 | 0.36 | 21
Authors |
---|
4 |
Name | Order | Citations | PageRank |
---|---|---|---
Peng Li | 1 | 275 | 69.71 |
Lu Dong | 2 | 107 | 10.38 |
He Xu | 3 | 36 | 22.25
Ting Fung Lau | 4 | 1 | 0.70 |