Title
MFST: Multi-Modal Feature Self-Adaptive Transformer for Infrared and Visible Image Fusion
Abstract
Infrared and visible image fusion aims to combine the thermal radiation information and detailed texture from the two source images into one informative fused image. Recently, deep learning methods have been widely applied to this task; however, those methods usually fuse multiple extracted features with the same fusion strategy, which ignores the differences in how these features represent the scene and so loses information during fusion. To address this issue, we propose a novel method named the multi-modal feature self-adaptive transformer (MFST) to preserve more significant information from the source images. First, multi-modal features are extracted from the input images by a convolutional neural network (CNN). Then, these features are fused by focal transformer blocks that can be trained with an adaptive fusion strategy according to the characteristics of the different features. Finally, the fused features and the saliency information of the infrared image are combined to obtain the fused image. The proposed fusion framework is evaluated on the TNO, LLVIP, and FLIR datasets, which cover various scenes. Experimental results demonstrate that our method outperforms several state-of-the-art methods in terms of both subjective and objective evaluation.
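The final reconstruction step, combining fused features with infrared saliency, can be sketched as a per-pixel weighted blend. This is an illustrative NumPy sketch, not the paper's exact formulation: the saliency definition (normalized absolute deviation from the mean intensity) and the blending rule are assumptions introduced here for clarity.

```python
import numpy as np

def infrared_saliency(ir):
    # Hypothetical saliency map: absolute deviation from the mean
    # intensity, normalized to [0, 1]. An assumption for illustration,
    # not the saliency measure used in the paper.
    s = np.abs(ir - ir.mean())
    return s / (s.max() + 1e-8)

def saliency_weighted_fusion(ir, vis):
    # Per-pixel blend: salient infrared regions keep thermal radiation,
    # the remaining regions keep visible-image texture.
    w = infrared_saliency(ir)
    return w * ir + (1.0 - w) * vis

# Toy 2x2 example: a hot target in the top-left corner of the infrared
# image, textured background in the visible image.
ir = np.array([[0.9, 0.1], [0.1, 0.1]])
vis = np.array([[0.2, 0.6], [0.5, 0.4]])
fused = saliency_weighted_fusion(ir, vis)
```

In this sketch the salient pixel (0.9 in the infrared image) dominates the fused output at that location, while non-salient pixels stay close to the visible image, which is the qualitative behavior the abstract describes.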
Year: 2022
DOI: 10.3390/rs14133233
Venue: REMOTE SENSING
Keywords: infrared image, visible image, transformer, image fusion, multi-modal feature, focal self-attention
DocType: Journal
Volume: 14
Issue: 13
ISSN: 2072-4292
Citations: 0
PageRank: 0.34
References: 0
Authors: 6
Name | Order | Citations | PageRank
Xiangzeng Liu | 1 | 0 | 1.69
Haojie Gao | 2 | 0 | 0.34
Qiguang Miao | 3 | 355 | 49.69
Yue Xi | 4 | 0 | 0.34
Yunfeng Ai | 5 | 0 | 0.34
Dingguo Gao | 6 | 0 | 0.34