Abstract |
---|
Prediction intervals in supervised machine learning bound the region where the true outputs of new samples may fall. They are necessary for separating reliable predictions of a trained model from near-random guesses, for minimizing the rate of false positives, and for other problem-specific tasks in applied machine learning. Many real problems have heteroscedastic stochastic outputs, which explains the need for input-dependent prediction intervals. This paper proposes estimating the input-dependent prediction intervals with a separate extreme learning machine (ELM) model, using the variance of its predictions as a correction term that accounts for model uncertainty. The variance is estimated from the model’s linear output layer with a weighted Jackknife method. The methodology is very fast, robust to heteroscedastic outputs, and handles both extremely large datasets and an insufficient amount of training data. |
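The two-term construction described in the abstract (an input-dependent noise variance from a second ELM, plus a Jackknife correction derived from the linear output layer) can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions, not the paper's exact method: the paper uses a *weighted* Jackknife, while the sketch below uses plain leave-one-out residuals of the least-squares output layer; the `fit_elm` helper, the hidden-layer size, and the 95% z-score of 1.96 are all illustrative choices.

```python
import numpy as np

def fit_elm(X, y, n_hidden=50, seed=None):
    """Fit a basic ELM: random tanh hidden layer, least-squares output layer."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # hidden-layer outputs
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # linear output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

def jackknife_pred_variance(H, y, beta):
    """Unweighted leave-one-out (Jackknife) variance from the linear output
    layer, using the closed-form LOO residuals of least squares."""
    G = np.linalg.pinv(H.T @ H)
    h = np.einsum('ij,jk,ik->i', H, G, H)         # hat-matrix diagonal h_ii
    resid = y - H @ beta
    loo_resid = resid / np.clip(1.0 - h, 1e-8, None)
    return np.var(loo_resid)                      # scalar model-uncertainty term

# --- usage sketch on synthetic heteroscedastic data ---------------
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1 + 0.2 * np.abs(X[:, 0]))

# main model
W, b, beta = fit_elm(X, y, seed=1)
pred = elm_predict(X, W, b, beta)

# second ELM models squared residuals -> input-dependent noise variance
W2, b2, beta2 = fit_elm(X, (y - pred) ** 2, seed=2)
noise_var = np.clip(elm_predict(X, W2, b2, beta2), 0.0, None)

# Jackknife correction for the main model's uncertainty
H = np.tanh(X @ W + b)
model_var = jackknife_pred_variance(H, y, beta)

# ~95% input-dependent prediction interval (z = 1.96 assumed)
half_width = 1.96 * np.sqrt(noise_var + model_var)
lower, upper = pred - half_width, pred + half_width
```

Using the closed-form leave-one-out residuals via the hat-matrix diagonal avoids refitting the output layer once per sample, which is what keeps a Jackknife on the linear layer cheap; this is consistent with the abstract's claims of speed and scalability to large datasets.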
Year | DOI | Venue |
---|---|---|
2019 | 10.1007/s13042-017-0777-2 | International Journal of Machine Learning and Cybernetics |
Keywords | Field | DocType
---|---|---|
ELM, Heteroscedastic, Prediction interval, Confidence interval, Variance estimation, False positives, Coverage | Training set, Heteroscedasticity, Jackknife resampling, Variance estimation, Computer science, Extreme learning machine, Prediction interval, Confidence interval, Statistics, False positive paradox | Journal
Volume | Issue | ISSN
---|---|---|
10 | 5 | 1868-808X
Citations | PageRank | References
---|---|---|
0 | 0.34 | 30
Authors |
---|
4 |
Name | Order | Citations | PageRank |
---|---|---|---|
Anton Akusok | 1 | 143 | 10.72 |
Yoan Miche | 2 | 1054 | 54.56 |
Kaj-Mikael Björk | 3 | 148 | 16.40 |
Amaury Lendasse | 4 | 1876 | 126.03 |