Title
Unsupervised Deep Multi-focus Image Fusion.
Abstract
Convolutional neural networks (CNNs) have recently been used for multi-focus image fusion. However, due to the lack of labeled data for supervised training of such networks, existing methods add Gaussian blur to focused images to simulate defocus and generate synthetic training data with ground truth. Moreover, they classify pixels as focused or defocused and use the results to construct fusion weight maps, which necessitates a series of post-processing steps. In this paper, we present unsupervised end-to-end learning that directly predicts the fully focused output image from multi-focus input image pairs. The proposed approach uses a novel CNN architecture trained to perform fusion without ground-truth fused images, exploiting structural similarity (SSIM), a metric widely accepted for fused image quality evaluation, to compute the loss. Consequently, we are able to train our network on real benchmark datasets instead of simulated ones. The model is a feed-forward, fully convolutional neural network that can process images of variable size at test time. Extensive evaluations on benchmark datasets show that our method outperforms existing state-of-the-art approaches in terms of visual quality and objective evaluations.
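The unsupervised training signal the abstract describes can be sketched as follows. This is a minimal illustration only: it uses a simplified single-window SSIM computed over the whole image (the paper presumably uses a local, windowed SSIM inside a deep learning framework), and the function names are illustrative assumptions, not the paper's API.

```python
import numpy as np

def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified single-window SSIM for images scaled to [0, 1].

    A real implementation computes SSIM in local sliding windows;
    this global variant only illustrates the formula's structure.
    """
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )

def fusion_loss(fused, src_a, src_b):
    """Unsupervised fusion loss: 1 minus the mean SSIM between the
    fused output and each partially focused input. No ground-truth
    all-in-focus image is needed, matching the abstract's claim."""
    return 1.0 - 0.5 * (ssim_global(fused, src_a) + ssim_global(fused, src_b))
```

Because SSIM of an image with itself is 1, the loss vanishes when the network output already matches both inputs, and grows as structural agreement with either input drops.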
Year
Venue
Field
2018
arXiv: Computer Vision and Pattern Recognition
Image fusion,Pattern recognition,Convolutional neural network,Computer science,Gaussian blur,Image quality,Supervised learning,Exploit,Ground truth,Artificial intelligence,Pixel
DocType
Journal
Volume
abs/1806.07272
Citations
1
PageRank
0.35
References
5
Authors
4
Name                    Order  Citations  PageRank
Xiang Yan               1      66         17.39
Syed Zulqarnain Gilani  2      23         3.44
Hanlin Qin              3      1          1.36
A. Mian                 4      1679       84.89