Title
Computed tomography vertebral segmentation from multi-vendor scanner data
Abstract
Automatic medical image segmentation is a crucial procedure in computer-assisted surgery. In particular, accurate segmentation allows three-dimensional reconstructions of surgical targets to capture fine anatomical structures, which contributes to successful surgical outcomes. However, the performance of automatic segmentation algorithms depends strongly on the consistency of image properties across medical images. To address this issue, we propose a model for standardizing computed tomography (CT) images: a CT image-to-image translation network that converts diverse CT images (non-standard images) into images with uniform characteristics (standard images), enabling more precise U-Net segmentation. Specifically, we combine an image-to-image translation network with a generative adversarial network consisting of a residual-block-based generator and a discriminator. We also use the feature-extraction layers of VGG-16 to extract the style of the standard image and the content of the non-standard image. Moreover, preserving the anatomical information of the non-standard image during image synthesis is essential for precise diagnosis and surgery. Therefore, we evaluate performance in three ways: (i) visualizing the geometric correspondence between the non-standard (content) images and the synthesized images to verify that anatomical structures are maintained; (ii) measuring image-similarity metrics; and (iii) assessing U-Net segmentation performance on the synthesized images. We show that our network transfers texture from standard CT images to diverse CT images (non-standard) acquired with different scanners and scan protocols, and we verify that the synthesized images retain the global pose and fine structures of the non-standard images. We also compare the segmentation predicted for a non-standard image with that predicted for the corresponding synthesized image generated by our network. In addition, we compare our model with a windowing process, in which the window parameters of the standard image are applied to the non-standard image, and confirm that our model outperforms windowing.
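The abstract describes using VGG-16 feature-extraction layers to obtain the style of the standard CT image and the content of the non-standard CT image. The PyTorch snippet below is a minimal, hypothetical sketch of that style/content idea, not the authors' implementation: the chosen VGG-16 layer indices, the Gram-matrix style term, the style weight, and the single-channel-to-RGB replication for CT slices are all assumptions made for illustration.

```python
# Hypothetical sketch of a VGG-16 content/style loss (not the paper's code).
# Assumptions: layer indices relu1_2/relu2_2/relu3_3/relu4_3, Gram-matrix style
# term, style weight, and replicating the 1-channel CT slice to 3 channels.
import torch
import torch.nn as nn
import torchvision.models as models


class VGGFeatures(nn.Module):
    """Return activations from a few fixed VGG-16 convolutional layers."""

    def __init__(self, layer_ids=(3, 8, 15, 22)):  # assumed relu1_2..relu4_3
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
        for p in vgg.parameters():
            p.requires_grad_(False)      # VGG is a fixed feature extractor
        self.vgg = vgg
        self.layer_ids = set(layer_ids)

    def forward(self, x):
        if x.shape[1] == 1:              # CT slices are single-channel
            x = x.repeat(1, 3, 1, 1)     # replicate to RGB for VGG (assumption)
        feats = []
        for i, layer in enumerate(self.vgg):
            x = layer(x)
            if i in self.layer_ids:
                feats.append(x)
        return feats


def gram(f):
    """Gram matrix of a feature map, used for the style term."""
    b, c, h, w = f.shape
    f = f.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)


def content_style_loss(extractor, synthesized, content_img, style_img,
                       style_weight=1e3):
    """Content term keeps anatomy of the non-standard image; style term
    pulls texture toward the standard image."""
    syn_f = extractor(synthesized)
    con_f = extractor(content_img)       # non-standard CT (anatomy to preserve)
    sty_f = extractor(style_img)         # standard CT (texture to transfer)
    content_loss = sum(torch.mean((s - c) ** 2) for s, c in zip(syn_f, con_f))
    style_loss = sum(torch.mean((gram(s) - gram(t)) ** 2)
                     for s, t in zip(syn_f, sty_f))
    return content_loss + style_weight * style_loss
```

In such a setup the generator would typically be trained with this perceptual term alongside the adversarial loss from the discriminator; the relative weighting shown here is illustrative only.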
Year
2022
DOI
10.1093/jcde/qwac072
Venue
JOURNAL OF COMPUTATIONAL DESIGN AND ENGINEERING
Keywords
computed tomography image, style transfer, super-resolution, generative adversarial networks, vertebral segmentation, image-to-image translation, U-Net
DocType
Journal
Volume
9
Issue
4
ISSN
2288-4300
Citations
0
PageRank
0.34
References
0
Authors
5
Name | Order | Citations | PageRank
Chaewoo Kim | 1 | 0 | 0.34
Oguzcan Bekar | 2 | 0 | 0.34
Hyunseok Seo | 3 | 0 | 0.34
M. S. Park | 4 | 1 | 2.38
Deukhee Lee | 5 | 0 | 0.34