Title
Fusion-UDCGAN: Multifocus Image Fusion via a U-Type Densely Connected Generation Adversarial Network
Abstract
Multifocus image fusion has attracted considerable attention because it can overcome the physical limitations of optical imaging equipment and fuse multiple images with different depths of field into one fully focused image. However, most existing deep learning-based fusion methods concentrate on segmenting focused and defocused regions, which causes a loss of detail near the boundaries. To address this issue, this article proposes a novel generative adversarial network with dense connections (Fusion-UDCGAN) to fuse multifocus images. More specifically, the encoder and the decoder are composed of dense modules with long dense connections to ensure the quality of the generated image. A content and clarity loss based on the L1 norm and the novel sum-modified-Laplacian (NSML) is further embedded so that the fused images retain more texture features. Considering that previous dataset-construction approaches may lose the relation between the overall structure and the information near the boundaries, a new dataset, which is uniformly distributed and simulates natural focusing boundary conditions, is constructed for model training. Subjective and objective experimental results indicate that the proposed method significantly improves sharpness, contrast, and detail richness compared with several state-of-the-art methods.
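The clarity loss mentioned above builds on the sum-modified-Laplacian focus measure. As a minimal sketch, the classical SML of Nayar and Nakagawa can be computed as below; the paper's NSML variant may differ in its exact normalization and windowing, so treat this only as an illustration of the underlying sharpness score.

```python
import numpy as np

def sum_modified_laplacian(img, step=1):
    """Global sum-modified-Laplacian (SML) sharpness score.

    The modified Laplacian takes absolute values of the horizontal and
    vertical second differences so they cannot cancel:
        ML(x, y) = |2 I(x, y) - I(x - s, y) - I(x + s, y)|
                 + |2 I(x, y) - I(x, y - s) - I(x, y + s)|
    SML sums ML over the image; larger values indicate a sharper image.
    """
    img = np.asarray(img, dtype=np.float64)
    s = step
    pad = np.pad(img, s, mode="edge")       # replicate borders
    c = pad[s:-s, s:-s]                     # original pixels
    ml = (np.abs(2.0 * c - pad[s:-s, :-2 * s] - pad[s:-s, 2 * s:]) +
          np.abs(2.0 * c - pad[:-2 * s, s:-s] - pad[2 * s:, s:-s]))
    return float(ml.sum())
```

Used inside a clarity loss, such a score rewards fused outputs whose local second-order variation matches that of the in-focus source regions; a flat (defocused) patch scores zero, while a high-frequency patch scores high.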
Year
2022
DOI
10.1109/TIM.2022.3159978
Venue
IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT
Keywords
Dense connections, generation adversarial network (GAN), multifocus image fusion, novel sum-modified-Laplacian (NSML)
DocType
Journal
Volume
71
ISSN
0018-9456
Citations
0
PageRank
0.34
References
0
Authors
4
Name          Order  Citations  PageRank
Yuan Gao      1      264        47.87
Shiwei Ma     2      136        21.79
Jingjing Liu  3      0          1.69
Xianchao Xiu  4      3          3.45