Title
ISSMF: Integrated semantic and spatial information of multi-level features for automatic segmentation in prenatal ultrasound images
Abstract
Ultrasound (US) imaging is widely used as an effective means of routine prenatal diagnosis. Biometric measurements obtained from fetal segmentation shed light on fetal health monitoring. However, accurate segmentation of US images places strict demands on sonographers, making the task time-consuming and tedious. In this paper, we take DeepLabv3+ as the backbone and propose a network based on Integrated Semantic and Spatial Information of Multi-level Features (ISSMF) to achieve automatic and accurate segmentation of four parts of the fetus in US images, whereas most previous works segment only one or two parts. Our contributions are threefold. First, to incorporate the semantic information of high-level features and the spatial information of low-level features of US images, we introduce a multi-level feature fusion module that integrates features at different scales. Second, we leverage the content-aware reassembly of features (CARAFE) upsampler to deeply explore the semantic and spatial information of the multi-level features. Third, to alleviate the performance degradation caused by batch normalization (BN) when the batch size is small, we use group normalization (GN) instead. Experiments on four parts of the fetus in US images show that our method outperforms U-Net, DeepLabv3+, and U-Net++, and that the biometric measurements based on our segmentation results closely match those derived by sonographers with ten years of work experience.
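The abstract describes three architectural choices: fusing low-level (spatial) and high-level (semantic) features across scales, upsampling the semantic branch with CARAFE, and replacing batch normalization with group normalization when the batch size is small. The following is a minimal PyTorch sketch of such a fusion block, not the authors' implementation: the module layout and channel sizes are assumptions, and bilinear interpolation stands in for the CARAFE operator.

# Minimal sketch (not the paper's code) of fusing a low-level, high-resolution
# feature map with an upsampled high-level, semantic feature map, using
# GroupNorm instead of BatchNorm. Channel sizes and layout are illustrative
# assumptions; bilinear upsampling is a stand-in for CARAFE.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FusionBlock(nn.Module):
    """Hypothetical multi-level feature fusion block."""

    def __init__(self, low_ch: int, high_ch: int, out_ch: int, groups: int = 8):
        super().__init__()
        # 1x1 convolutions project both branches to a common channel width.
        self.proj_low = nn.Conv2d(low_ch, out_ch, kernel_size=1, bias=False)
        self.proj_high = nn.Conv2d(high_ch, out_ch, kernel_size=1, bias=False)
        # GroupNorm is independent of batch size, which is the motivation the
        # abstract gives for replacing BatchNorm.
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * out_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.GroupNorm(groups, out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, low: torch.Tensor, high: torch.Tensor) -> torch.Tensor:
        # Upsample the semantic branch to the spatial size of the low-level
        # branch; the paper uses CARAFE here, bilinear is only a placeholder.
        high = F.interpolate(high, size=low.shape[-2:], mode="bilinear",
                             align_corners=False)
        low = self.proj_low(low)
        high = self.proj_high(high)
        return self.fuse(torch.cat([low, high], dim=1))


if __name__ == "__main__":
    # Smoke test with batch size 1, where BatchNorm statistics would be unreliable.
    low = torch.randn(1, 64, 128, 128)   # low-level, high-resolution features
    high = torch.randn(1, 256, 32, 32)   # high-level, semantic features
    out = FusionBlock(low_ch=64, high_ch=256, out_ch=128)(low, high)
    print(out.shape)  # torch.Size([1, 128, 128, 128])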
Year
2022
DOI
10.1016/j.artmed.2022.102254
Venue
ARTIFICIAL INTELLIGENCE IN MEDICINE
Keywords
Prenatal diagnosis, Ultrasound imaging, Automatic segmentation, Multi-level feature fusion, DeepLabv3+
DocType
Journal
Volume
125
ISSN
0933-3657
Citations
0
PageRank
0.34
References
0
Authors
4
Name            Order   Citations   PageRank
Yihao Sun       1       0           0.34
Hongjian Yang   2       0           0.34
Jiliu Zhou      3       450         58.21
Yan Wang        4       168         28.11