Title
MATR: Multimodal Medical Image Fusion via Multiscale Adaptive Transformer
Abstract
Owing to the limitations of imaging sensors, it is challenging to obtain a medical image that simultaneously contains functional metabolic information and structural tissue details. Multimodal medical image fusion, an effective way to merge the complementary information in different modalities, has become a significant technique to facilitate clinical diagnosis and surgical navigation. With powerful feature representation ability, deep learning (DL)-based methods have improved such fusion results but still have not achieved satisfactory performance. Specifically, existing DL-based methods generally depend on convolutional operations, which extract local patterns well but have limited capability to preserve global context information. To compensate for this defect and achieve accurate fusion, we propose a novel unsupervised method to fuse multimodal medical images via a multiscale adaptive Transformer, termed MATR. In the proposed method, instead of directly employing vanilla convolution, we introduce an adaptive convolution that adaptively modulates the convolutional kernel based on the global complementary context. To further model long-range dependencies, an adaptive Transformer is employed to enhance the global semantic extraction capability. Our network architecture is designed in a multiscale fashion so that useful multimodal information can be adequately acquired across different scales. Moreover, an objective function composed of a structural loss and a region mutual information loss is devised to enforce information preservation at both the structural level and the feature level. Extensive experiments on a mainstream database demonstrate that the proposed method outperforms other representative and state-of-the-art methods in terms of both visual quality and quantitative evaluation. We also extend the proposed method to other biomedical image fusion tasks, and the pleasing fusion results illustrate that MATR has good generalization capability. The code of the proposed method is available at https://github.com/tthinking/MATR.
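For illustration only, the sketch below shows one way the adaptive convolution described in the abstract could look in PyTorch: a base kernel is re-weighted by a small network applied to globally pooled features of the concatenated multimodal input. This is a minimal sketch under assumed design choices (the module name AdaptiveConv2d, sigmoid channel-wise modulation, and the per-sample grouped-convolution trick are all hypothetical), not the authors' implementation; refer to the linked GitHub repository for the official code.

```python
# Illustrative sketch only (NOT the authors' code): a convolution whose kernel
# is modulated by global context, in the spirit of the adaptive convolution
# described in the abstract. All design details here are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveConv2d(nn.Module):
    """Conv2d whose base kernel is re-weighted per input sample, using a small
    network applied to globally pooled features of the multimodal input."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.02)
        self.bias = nn.Parameter(torch.zeros(out_ch))
        # Predicts one modulation scalar per output channel from global context.
        self.ctx = nn.Sequential(nn.Linear(in_ch, out_ch), nn.Sigmoid())
        self.k = k

    def forward(self, x):
        b, c, _, _ = x.shape
        # Global complementary context: channel-wise mean over the whole image.
        g = x.mean(dim=(2, 3))                        # (B, C_in)
        scale = self.ctx(g).view(b, -1, 1, 1, 1)      # (B, C_out, 1, 1, 1)
        # Per-sample modulated kernels.
        w = self.weight.unsqueeze(0) * scale          # (B, C_out, C_in, k, k)
        # Grouped-conv trick: apply a different kernel to each sample.
        x = x.reshape(1, b * c, *x.shape[2:])
        w = w.reshape(b * self.weight.size(0), c, self.k, self.k)
        y = F.conv2d(x, w, padding=self.k // 2, groups=b)
        return y.reshape(b, -1, *y.shape[2:]) + self.bias.view(1, -1, 1, 1)

if __name__ == "__main__":
    # Toy usage: fuse a 1-channel MRI slice with a 1-channel SPECT slice.
    mri, spect = torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64)
    feat = AdaptiveConv2d(in_ch=2, out_ch=16)(torch.cat([mri, spect], dim=1))
    print(feat.shape)  # torch.Size([2, 16, 64, 64])
```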
Year
2022
DOI
10.1109/TIP.2022.3193288
Venue
IEEE TRANSACTIONS ON IMAGE PROCESSING
Keywords
Transformers, Image fusion, Single photon emission computed tomography, Magnetic resonance imaging, Transforms, Medical diagnostic imaging, Task analysis, biomedical image, transformer, adaptive convolution, deep learning
DocType
Journal
Volume
31
Issue
1
ISSN
1057-7149
Citations
0
PageRank
0.34
References
0
Authors
4
Name            Order   Citations   PageRank
Wei Tang        1       0           0.34
Fazhi He        2       540         41.02
Yu Liu          3       492         30.80
Yansong Duan    4       3           3.79