Title
What Makes Transfer Learning Work for Medical Images: Feature Reuse & Other Factors
Abstract
Transfer learning is a standard technique to transfer knowledge from one domain to another. For applications in medical imaging, transfer from ImageNet has become the de facto approach, despite differences in the tasks and image characteristics between the domains. However, it is unclear what factors determine whether, and to what extent, transfer learning to the medical domain is useful. The long-standing assumption that features from the source domain get reused has recently been called into question. Through a series of experiments on several medical image benchmark datasets, we explore the relationship between transfer learning, data size, the capacity and inductive bias of the model, as well as the distance between the source and target domain. Our findings suggest that transfer learning is beneficial in most cases, and we characterize the important role feature reuse plays in its success.
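The abstract refers to the de facto transfer-learning recipe for medical imaging: initialize a network from ImageNet-pretrained weights, replace the classification head, and fine-tune on the target task. Below is a minimal, hypothetical PyTorch sketch of that recipe (not the authors' code); the ResNet-50 backbone, the two-class head, and the optimizer settings are illustrative assumptions.

```python
# Minimal sketch of ImageNet-to-medical transfer learning (assumptions noted inline).
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2  # assumed binary medical classification task

# ImageNet-pretrained initialization (transfer learning) ...
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
# ... versus random initialization (training from scratch) for comparison:
# model = models.resnet50(weights=None)

# Replace the 1000-way ImageNet head with a task-specific head.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Fine-tune all layers; feature reuse occurs to the extent that the
# pretrained weights remain useful for the target domain.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step on a batch from the target (medical) domain."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```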
Year
2022
DOI
10.1109/CVPR52688.2022.00901
Venue
IEEE Conference on Computer Vision and Pattern Recognition
Keywords
Transfer/low-shot/long-tail learning, Deep learning architectures and techniques, Medical, biological and cell microscopy
DocType
Conference
Volume
2022
Issue
1
Citations
0
PageRank
0.34
References
0
Authors
5
Name                  Order  Citations  PageRank
Christos Matsoukas    1      0          0.34
Johan Fredin Haslum   2      0          0.34
Moein Sorkhei         3      0          0.34
Magnus Söderberg      4      0          0.34
Kevin Smith           5      2430       88.78