Title
Towards safe deep learning: accurately quantifying biomarker uncertainty in neural network predictions.
Abstract
Automated medical image segmentation, particularly with deep learning, has shown outstanding performance in semantic segmentation tasks. However, these methods rarely quantify their uncertainty, which may lead to errors in downstream analysis. In this work, we propose to use Bayesian neural networks to quantify uncertainty within the domain of semantic segmentation. We also propose a method to convert voxel-wise segmentation uncertainty into volumetric uncertainty, and to calibrate the accuracy and reliability of confidence intervals of derived measurements. When applied to a tumour volume estimation application, we demonstrate that with such modelling of uncertainty, deep learning systems can be made to report volume estimates with well-calibrated error bars, making them safer for clinical use. We also show that the uncertainty estimates extrapolate to unseen data, and that the confidence intervals are robust in the presence of artificial noise. This could provide a form of quality control and quality assurance, and may permit further adoption of deep learning tools in the clinic.
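The abstract above describes converting voxel-wise segmentation uncertainty into volumetric uncertainty with calibrated confidence intervals. The sketch below illustrates one common way such a pipeline can be realised, assuming Monte Carlo dropout as the approximate Bayesian inference scheme; it is not the authors' implementation, and the toy network, dropout rate, sample count, decision threshold, voxel size, and names such as TinySegNet and mc_volume_interval are illustrative assumptions only.

```python
# Minimal sketch (not the paper's exact method): Monte Carlo dropout sampling
# over a segmentation network, turning per-voxel uncertainty into a
# distribution over volumes and a percentile confidence interval.
import numpy as np
import torch
import torch.nn as nn


class TinySegNet(nn.Module):
    """Toy 3D segmentation network with dropout so predictions stay stochastic."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Dropout3d(p=0.5),  # kept active at test time for MC sampling
            nn.Conv3d(8, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))  # per-voxel foreground probability


def mc_volume_interval(model, image, n_samples=20, voxel_volume_mm3=1.0, alpha=0.05):
    """Sample segmentations with dropout enabled and return a mean volume
    estimate with a (1 - alpha) percentile confidence interval."""
    model.train()  # keep dropout layers stochastic during inference
    volumes = []
    with torch.no_grad():
        for _ in range(n_samples):
            prob = model(image)              # shape (1, 1, D, H, W)
            mask = (prob > 0.5).float()      # threshold each sampled segmentation
            volumes.append(mask.sum().item() * voxel_volume_mm3)
    volumes = np.array(volumes)
    lo, hi = np.percentile(volumes, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return volumes.mean(), (lo, hi)


if __name__ == "__main__":
    torch.manual_seed(0)
    model = TinySegNet()                    # untrained, for illustration only
    image = torch.randn(1, 1, 16, 32, 32)   # synthetic 3D "scan"
    mean_vol, (lo, hi) = mc_volume_interval(model, image)
    print(f"volume: {mean_vol:.1f} mm^3, 95% CI: [{lo:.1f}, {hi:.1f}]")
```

Sampling whole segmentations and summing each one (rather than summing per-voxel variances) preserves spatial correlations between voxels, which is why the interval here is taken over sampled volumes; calibration of such intervals against held-out data, as the abstract describes, would be a separate step.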
Year
2018
DOI
10.1007/978-3-030-00928-1_78
Venue
Lecture Notes in Computer Science
DocType
Conference
Volume
11070
ISSN
0302-9743
Citations
1
PageRank
0.35
References
6
Authors
5
Name                 Order  Citations  PageRank
Zach Eaton-Rosen     1      65         7.69
Felix J. S. Bragman  2      9          2.64
Sotirios Bisdas      3      3          1.52
Sébastien Ourselin   4      2499       237.61
Cardoso M. Jorge     5      64         13.70