Title
Inpainting Cropped Diffusion MRI Using Deep Generative Models.
Abstract
Minor artifacts introduced during image acquisition are often negligible to the human eye, such as a confined field of view that crops the top of the head from an MRI. This cropping artifact, however, can cause suboptimal processing of the MRI, resulting in data omission or reduced statistical power of subsequent analyses. We propose to avoid such data and quality loss by restoring the missing regions of the head with variational autoencoders (VAE), a class of deep generative models previously applied to high-resolution image reconstruction. Based on diffusion weighted images (DWI) acquired by the National Consortium on Alcohol and Neurodevelopment in Adolescence (NCANDA), we evaluate the accuracy of inpainting the top of the head by common autoencoder models (U-Net, VQVAE, and VAE-GAN) and a custom model proposed herein called U-VQVAE. Our results show that U-VQVAE not only achieved the highest inpainting accuracy, but that processing the inpainted MRIs also produced lower fractional anisotropy (FA) in the supplementary motor area than FA derived from the original MRIs. Lower FA implies that inpainting reduces noise in DWI processing and thus increases the quality of the generated results. The code is available at https://github.com/RdoubleA/DWIinpainting.
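The abstract gives no implementation details, so the following is only a minimal illustrative PyTorch sketch of the general setup it describes (training an autoencoder to restore the cropped top of the head from the rest of the DWI volume). It is not the authors' U-VQVAE; the network layout, layer sizes, volume shape, loss, and cropped fraction are all assumptions.

# Minimal sketch (assumed setup, not the authors' U-VQVAE): a small 3D
# convolutional autoencoder trained to reconstruct a DWI volume whose
# top axial slices were zeroed out to mimic the cropping artifact.
import torch
import torch.nn as nn

class InpaintingAE(nn.Module):
    def __init__(self, channels=1, hidden=32):
        super().__init__()
        # Downsample twice, then upsample back to the input resolution.
        self.encoder = nn.Sequential(
            nn.Conv3d(channels, hidden, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(hidden, hidden * 2, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(hidden * 2, hidden, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose3d(hidden, channels, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def crop_top(volume, frac=0.15):
    """Zero out the top `frac` of axial slices to simulate the cropped field of view."""
    cropped = volume.clone()
    n_slices = volume.shape[-1]
    cropped[..., -int(frac * n_slices):] = 0.0
    return cropped

model = InpaintingAE()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

# Dummy batch standing in for preprocessed DWI volumes of shape (B, C, X, Y, Z).
full = torch.rand(2, 1, 64, 64, 64)
cropped = crop_top(full)

# One training step: reconstruct the full head from the cropped input.
optimizer.zero_grad()
pred = model(cropped)
loss = loss_fn(pred, full)
loss.backward()
optimizer.step()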
Year
2020
DOI
10.1007/978-3-030-59354-4_9
Venue
PRIME@MICCAI
DocType
Conference
Volume
12329
Citations
0
PageRank
0.34
References
0
Authors
7
Name                    Order  Citations  PageRank
Rafi Ayub               1      0          0.34
Qingyu Zhao             2      2          4.09
M. J. Meloy             3      0          0.34
Edith V Sullivan        4      150        19.25
Adolf Pfefferbaum       5      174        20.61
Ehsan Adeli Mosabbeb    6      261        39.27
Kilian Pohl             7      577        46.78