Abstract
---
This paper introduces PMV (Perturbed Model Validation), a new technique to validate model relevance and detect overfitting or underfitting. PMV operates by injecting noise into the training data, re-training the model on the perturbed data, and then using the rate at which training accuracy decreases to assess model relevance. A larger decrease rate indicates a better concept-hypothesis fit. We realise PMV by using label flipping to inject noise, and evaluate it on four real-world datasets (breast cancer, adult, connect-4, and MNIST) and three synthetic datasets in the binary classification setting. The results reveal that PMV selects models more precisely and more stably than cross-validation, and effectively detects both overfitting and underfitting.
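The PMV procedure the abstract describes can be sketched in a few lines. The toy example below is an illustration under assumptions, not the authors' implementation: it uses synthetic two-blob data and a hand-rolled nearest-centroid classifier, flips an increasing fraction of labels, retrains, and measures training accuracy on the flipped labels. A model that fits the underlying concept (rather than memorising labels) should show a steep accuracy decrease as the flip ratio grows.

```python
# Illustrative PMV sketch (label flipping + retraining); not the paper's code.
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated Gaussian blobs: a concept a simple model can fit.
n = 200
X = np.vstack([rng.normal(-2.0, 1.0, (n, 2)), rng.normal(2.0, 1.0, (n, 2))])
y = np.array([0] * n + [1] * n)

def fit_nearest_centroid(X, y):
    """Train by storing one centroid per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(model, X):
    classes = sorted(model)
    dists = np.stack([np.linalg.norm(X - model[c], axis=1) for c in classes])
    return np.array(classes)[dists.argmin(axis=0)]

def training_accuracy(X, y):
    """Retrain on (possibly noisy) labels, score on those same labels."""
    model = fit_nearest_centroid(X, y)
    return (predict(model, X) == y).mean()

ratios = [0.0, 0.1, 0.2, 0.3]
accs = []
for r in ratios:
    y_noisy = y.copy()
    flip = rng.choice(len(y), size=int(r * len(y)), replace=False)
    y_noisy[flip] = 1 - y_noisy[flip]           # inject label noise
    accs.append(training_accuracy(X, y_noisy))  # accuracy on the noisy labels

# PMV's signal: the slope of training accuracy against the flip ratio.
slope = np.polyfit(ratios, accs, 1)[0]
print(accs[0], slope)
```

Because the nearest-centroid model cannot memorise individual flipped points, its training accuracy falls roughly in step with the flip ratio (slope near -1), which PMV reads as a good concept-hypothesis fit. An overfitting memoriser (e.g. 1-nearest-neighbour) would keep near-perfect training accuracy on the noisy labels, giving a flat slope.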
Year | Venue | DocType
---|---|---
2019 | arXiv: Learning | Journal

Volume | Citations | PageRank
---|---|---
abs/1905.10201 | 0 | 0.34

References | Authors
---|---
0 | 5
Name | Order | Citations | PageRank |
---|---|---|---
Jie M. Zhang | 1 | 0 | 0.34 |
Earl T. Barr | 2 | 468 | 15.46 |
Benjamin Guedj | 3 | 9 | 8.82 |
Mark Harman | 4 | 10264 | 389.82 |
John Shawe-Taylor | 5 | 11879 | 1518.73 |