Title |
---|
What went wrong and when? Instance-wise feature importance for time-series black-box models |
Abstract |
---|
Explanations of time series models are useful for high-stakes applications like healthcare but have received little attention in the machine learning literature. We propose FIT, a framework that evaluates the importance of observations for a multivariate time-series black-box model by quantifying the shift in the predictive distribution over time. FIT defines the importance of an observation based on its contribution to the distributional shift under a KL-divergence that contrasts the predictive distribution against a counterfactual where the rest of the features are unobserved. We also demonstrate the need to control for time-dependent distribution shifts. We compare with state-of-the-art baselines on simulated and real-world clinical data and demonstrate that our approach is superior in identifying important time points and observations throughout the time series. |
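The scoring idea described in the abstract — an observation's importance as its contribution to the shift in the predictive distribution, measured by a KL divergence against a counterfactual prediction — can be sketched for a categorical predictive distribution. This is a minimal illustration of the abstract's description, not the paper's implementation; the function names and toy distributions are assumptions.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two categorical distributions."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def importance_score(p_full, p_counterfactual):
    """Illustrative importance of an observation: the KL divergence
    between the model's prediction with the observation included and
    a counterfactual prediction in which the remaining features are
    treated as unobserved (hypothetical helper, not FIT's exact score)."""
    return kl_divergence(p_full, p_counterfactual)

# Toy example for a binary classifier at one time step:
p_full = [0.9, 0.1]  # predictive distribution with the observation
p_cf = [0.5, 0.5]    # counterfactual without it (rest unobserved)
score = importance_score(p_full, p_cf)
```

A large score indicates the observation shifted the prediction substantially at that time step; a score near zero means the counterfactual prediction is essentially unchanged.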
Year | Venue | DocType | Volume | Citations | PageRank | References | Authors
---|---|---|---|---|---|---|---
2020 | NIPS 2020 | Conference | 33 | 0 | 0.34 | 0 | 5
Name | Order | Citations | PageRank |
---|---|---|---|
Sana Tonekaboni | 1 | 1 | 1.70 |
Shalmali Joshi | 2 | 1 | 1.02 |
Kieran R. Campbell | 3 | 3 | 1.84 |
David K. Duvenaud | 4 | 17 | 4.03
Anna Goldenberg | 5 | 276 | 26.12 |