Title
USCL: Pretraining Deep Ultrasound Image Diagnosis Model Through Video Contrastive Representation Learning
Abstract
Most deep neural network (DNN)-based ultrasound (US) medical image analysis models use backbones pretrained on natural images (e.g., ImageNet) for better generalization. However, the domain gap between natural and medical images causes an inevitable performance bottleneck. To alleviate this problem, a US dataset named US-4 is constructed for pretraining directly on the target domain. It contains over 23,000 images drawn from four US video sub-datasets. To learn robust features from US-4, we propose a US semi-supervised contrastive learning method, named USCL, for pretraining. To avoid high similarity between negative pairs and to mine abundant visual features from limited US videos, USCL adopts a sample pair generation method that enriches the features involved in a single step of contrastive optimization. Extensive experiments on several downstream tasks show the superiority of USCL pretraining over ImageNet pretraining and other state-of-the-art (SOTA) pretraining approaches. In particular, the USCL-pretrained backbone achieves a fine-tuning accuracy of over 94% on the POCUS dataset, 10% higher than the 84% of the ImageNet-pretrained model. The source code of this work is available at https://github.com/983632847/USCL.
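The abstract describes contrastive pretraining on positive/negative sample pairs. The sketch below is not the paper's exact USCL objective (which adds semi-supervision and a video-based sample pair generation step); it is a generic SimCLR-style NT-Xent contrastive loss, shown only to illustrate the kind of optimization step the abstract refers to. All function and variable names here are illustrative assumptions.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """Generic NT-Xent contrastive loss (illustrative, not USCL's exact loss).

    z1, z2: (N, D) embeddings of two augmented views of the same N samples;
    row i of z1 and row i of z2 form a positive pair, all other rows are
    treated as negatives.
    """
    z = np.concatenate([z1, z2], axis=0)                 # (2N, D) all embeddings
    z = z / np.linalg.norm(z, axis=1, keepdims=True)     # L2-normalize rows
    sim = z @ z.T / temperature                          # scaled cosine similarities
    n = z1.shape[0]
    np.fill_diagonal(sim, -np.inf)                       # exclude self-similarity
    # the positive for row i is row i+n (and vice versa)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    # cross-entropy: -log softmax probability of the positive pair
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)
    return loss.mean()
```

With perfectly aligned positives and orthogonal negatives (e.g., identity-matrix embeddings), the loss is small but nonzero, since negatives still contribute to the softmax denominator.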
Year
2021
DOI
10.1007/978-3-030-87237-3_60
Venue
MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2021, PT VIII
Keywords
Ultrasound, Pretrained model, Contrastive learning
DocType
Conference
Volume
12908
ISSN
0302-9743
Citations
0
PageRank
0.34
References
0
Authors
7
Name            Order  Citations  PageRank
Yixiong Chen    1      0          0.34
Chunhui Zhang   2      24         6.30
Li Liu          3      1          1.03
Cheng Feng      4      0          0.34
Changfeng Dong  5      0          0.34
Yongfang Luo    6      1          0.69
Xiang Wan       7      0          1.01