Title
Comparison of Prediction Model Performance Updating Protocols - Using a Data-Driven Testing Procedure to Guide Updating.
Abstract
In evolving clinical environments, the accuracy of prediction models deteriorates over time. Guidance on the design of model updating policies is limited, and little work has explored how different policies affect future model performance across model types. We implemented a new data-driven updating strategy based on a nonparametric testing procedure and compared it to two baseline approaches in which models are either never updated or fully refit annually. The test-based strategy generally recommended intermittent recalibration and delivered better-calibrated predictions than either baseline strategy. It also highlighted differences in the updating requirements of logistic regression, L1-regularized logistic regression, random forest, and neural network models, both in the extent and in the timing of updates. These findings underscore the potential of a data-driven maintenance approach, rather than a one-size-fits-all policy, to sustain more stable and accurate model performance over time.
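The abstract describes the test-based strategy only at a high level, and the paper's exact nonparametric test is not reproduced here. The sketch below illustrates the general idea under stated assumptions: it flags calibration drift using bootstrap confidence intervals for the logistic calibration intercept and slope, then maps the result to the smallest corrective action. The function names, the choice of test, and the decision thresholds are all hypothetical, not the authors' procedure.

```python
# Illustrative sketch only: the paper's exact nonparametric test is not
# reproduced here. As a stand-in, this checks calibration drift via bootstrap
# confidence intervals for the logistic calibration intercept and slope.
# All names, the choice of test, and the thresholds are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression  # needs scikit-learn >= 1.2

def calibration_coeffs(y, p, eps=1e-6):
    """Fit y ~ a + b * logit(p); perfect calibration gives (a, b) = (0, 1)."""
    p = np.clip(p, eps, 1 - eps)
    logit_p = np.log(p / (1 - p))
    m = LogisticRegression(penalty=None).fit(logit_p.reshape(-1, 1), y)
    return m.intercept_[0], m.coef_[0, 0]

def recommend_update(y, p, n_boot=500, alpha=0.05, seed=0):
    """Bootstrap CIs for (intercept, slope) on recent labeled data, then
    recommend the smallest update that addresses the detected drift."""
    rng = np.random.default_rng(seed)
    n, draws = len(y), []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        if y[idx].min() == y[idx].max():
            continue  # skip degenerate resamples with a single outcome class
        draws.append(calibration_coeffs(y[idx], p[idx]))
    draws = np.asarray(draws)
    a_lo, a_hi = np.quantile(draws[:, 0], [alpha / 2, 1 - alpha / 2])
    b_lo, b_hi = np.quantile(draws[:, 1], [alpha / 2, 1 - alpha / 2])
    if not (b_lo <= 1.0 <= b_hi):
        return "recalibrate intercept and slope (or consider a full refit)"
    if not (a_lo <= 0.0 <= a_hi):
        return "recalibrate intercept"
    return "no update"
```

Applied at each monitoring interval (e.g., annually) to each deployed model, a check of this kind yields the behavior the abstract reports: updates fire only when the data warrant them, so different model classes can end up on different update schedules.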
Year: 2019
Venue: AMIA
DocType: Conference
Volume: 2019
ISSN: 1942-597X
Citations: 0
PageRank: 0.34
References: 0
Authors: 5
Name                Order  Citations  PageRank
Sharon E Davis      1      2          1.07
Robert A Greevy     2      2          1.41
Thomas A. Lasko     3      194        20.36
Colin Walsh         4      23         6.12
Michael E. Matheny  5      202        33.36