Title
Generative Adversarial Network Using Multi-modal Guidance for Ultrasound Images Inpainting.
Abstract
Medical image inpainting not only helps computer-aided diagnosis systems eliminate the interference of irrelevant information in medical images, but also helps doctors with prognosis and surgical evaluation by masking and inpainting the lesion area. However, existing diffusion-based or patch-based methods perform poorly on complex images with non-repeating structures, and generation-based methods lack sufficient prior knowledge, so they cannot produce repaired content with reasonable structure and visual realism. This paper proposes a generative adversarial network with multi-modal guidance (MMG-GAN), composed of a multi-modal guided network and a fine inpainting network. The multi-modal guided network extracts the low-frequency structure, high-frequency texture, and high-order semantics of the original image through a structure reconstruction generator, a texture refinement generator, and a semantic guidance generator. Exploiting the implicit attention mechanism of the convolution operation, the fine inpainting network adaptively fuses these features to achieve realistic inpainting. With the multi-modal guided network, MMG-GAN produces inpainted content with reasonable structure, reliable texture, and consistent semantics. Experimental results on the Thyroid Ultrasound Image (TUI) dataset and the TN-SCUI2020 dataset show that our method outperforms other state-of-the-art methods in terms of PSNR, SSIM, and relative l1 measures. Code and the TUI dataset will be made publicly available.
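The abstract's decomposition into low-frequency structure, high-frequency texture, and high-order semantics can be illustrated with a toy NumPy sketch. The filters and fusion rule below are illustrative stand-ins (a box blur, a residual, a coarse down/up-sample, and a plain average), not the paper's actual learned generators or fusion network.

```python
import numpy as np

def box_blur(img, k=3):
    """Mean filter over a k x k neighborhood (edge-padded): a crude low-pass."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def multi_modal_guidance(img, scale=4):
    """Toy analogue of the three guidance branches.

    structure : low-frequency component (stand-in for the structure
                reconstruction generator)
    texture   : high-frequency residual (stand-in for the texture
                refinement generator)
    semantic  : coarse map from downsample + nearest-neighbour upsample
                (stand-in for the semantic guidance generator)
    fused     : plain average, a stand-in for the fine inpainting
                network's learned, attention-like feature fusion
    """
    structure = box_blur(img)
    texture = img - structure  # residual: structure + texture == img exactly
    coarse = img[::scale, ::scale]
    semantic = np.kron(coarse, np.ones((scale, scale)))
    semantic = semantic[:img.shape[0], :img.shape[1]]
    fused = 0.5 * (structure + texture) + 0.5 * semantic
    return structure, texture, semantic, fused
```

In the real model each branch is a trained generator and the fusion is learned; this sketch only shows how complementary low-frequency, high-frequency, and coarse semantic views of the same image can be produced and recombined.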
Year: 2020
DOI: 10.1007/978-3-030-63830-6_29
Venue: ICONIP (1)
DocType: Conference
Citations: 0
PageRank: 0.34
References: 0
Authors: 8
Name           Order  Citations  PageRank
Ruiguo Yu      1      4          2.87
Jiachen Hu     2      0          0.34
Xi Wei         3      0          1.01
Mei Yu         4      0          5.41
Jialin Zhu     5      0          3.04
Jie Gao        6      0          1.35
Zhiqiang Liu   7      116        24.93
Xuewei Li      8      8          5.90