Title
Bayesian Volumetric Autoregressive Generative Models for Better Semisupervised Learning.
Abstract
Deep generative models are rapidly gaining traction in medical imaging. Nonetheless, most generative architectures struggle to capture the underlying probability distributions of volumetric data, exhibit convergence problems, and offer no robust indices of model uncertainty. By comparison, the autoregressive generative model PixelCNN can be extended to volumetric data with relative ease: it directly attempts to learn the true underlying probability distribution, and it admits a Bayesian reformulation that provides a principled framework for reasoning about model uncertainty. Our contributions in this paper are twofold: first, we extend PixelCNN to work with volumetric brain magnetic resonance imaging data; second, we show that reformulating this model to approximate a deep Gaussian process yields a measure of uncertainty that improves the performance of semi-supervised learning, in particular classification performance in settings where the proportion of labelled data is low. We quantify this improvement across classification, regression, and semantic segmentation tasks, training and testing on clinical magnetic resonance brain imaging data comprising T1-weighted and diffusion-weighted sequences.
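As a minimal sketch only (this record contains no code, and this is not the authors' implementation), the fragment below illustrates the two ingredients the abstract names: a causally masked 3D convolution of the kind used to extend PixelCNN to volumes, and Monte Carlo dropout kept active at test time as an approximation to a deep Gaussian process that yields an uncertainty index. All names (MaskedConv3d, TinyVoxelCNN, mc_dropout_nll), layer widths, kernel sizes, and the dropout rate are illustrative assumptions.

# Illustrative PyTorch sketch, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedConv3d(nn.Conv3d):
    # 3D convolution whose kernel is masked so each voxel depends only on voxels
    # preceding it in raster (z, y, x) order; mask type "A" also hides the centre voxel.
    def __init__(self, mask_type, *args, **kwargs):
        super().__init__(*args, **kwargs)
        assert mask_type in ("A", "B")
        self.register_buffer("mask", torch.ones_like(self.weight))
        _, _, kd, kh, kw = self.weight.shape
        cd, ch, cw = kd // 2, kh // 2, kw // 2
        self.mask[:, :, cd, ch, cw + (mask_type == "B"):] = 0  # future x in the centre row
        self.mask[:, :, cd, ch + 1:, :] = 0                    # future y in the centre slice
        self.mask[:, :, cd + 1:, :, :] = 0                     # future z slices
    def forward(self, x):
        self.weight.data *= self.mask
        return super().forward(x)

class TinyVoxelCNN(nn.Module):
    # Toy volumetric PixelCNN: per-voxel categorical distribution over 256 intensity bins.
    def __init__(self, channels=32, p_drop=0.2):
        super().__init__()
        self.net = nn.Sequential(
            MaskedConv3d("A", 1, channels, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Dropout3d(p_drop),  # kept active at test time for MC dropout
            MaskedConv3d("B", channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Dropout3d(p_drop),
            nn.Conv3d(channels, 256, kernel_size=1),
        )
    def forward(self, x):
        return self.net(x)  # logits of shape (batch, 256, D, H, W)

def mc_dropout_nll(model, volume, n_samples=20):
    # Mean and spread of the negative log-likelihood over stochastic forward passes;
    # the spread across samples serves as an index of model uncertainty.
    model.train()  # keep dropout active (Monte Carlo dropout)
    targets = (volume.squeeze(1).clamp(0, 1) * 255).long()
    nlls = torch.stack([F.cross_entropy(model(volume), targets) for _ in range(n_samples)])
    return nlls.mean(), nlls.std()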
Year
2019
DOI
10.1007/978-3-030-32251-9_47
Venue
Lecture Notes in Computer Science
Keywords
Generative, Semi-supervised, Bayesian, Autoregressive
DocType
Conference
Volume
11767
ISSN
0302-9743
Citations
0
PageRank
0.34
References
0
Authors
5
Name | Order | Citations | PageRank
Guilherme Pombo | 1 | 0 | 0.34
Robert Gray | 2 | 0 | 0.68
Thomas Varsavsky | 3 | 1 | 1.42
John Ashburner | 4 | 3589 | 382.57
Parashkev Nachev | 5 | 3 | 2.81