Title
Deep Guided Learning for Fast Multi-Exposure Image Fusion
Abstract
We propose a fast multi-exposure image fusion (MEF) method, namely MEF-Net, for static image sequences of arbitrary spatial resolution and exposure number. We first feed a low-resolution version of the input sequence to a fully convolutional network for weight map prediction. We then jointly upsample the weight maps using a guided filter. The final image is computed by a weighted fusion. Unlike conventional MEF methods, MEF-Net is trained end-to-end by optimizing the perceptually calibrated MEF structural similarity (MEF-SSIM) index over a database of training sequences at full resolution. Across an independent set of test sequences, we find that the optimized MEF-Net achieves consistent improvement in visual quality for most sequences, and runs 10 to 1000 times faster than state-of-the-art methods. The code is made publicly available at https://github.com/makedede/MEFNet.
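The pipeline described in the abstract (low-resolution weight map prediction, guided joint upsampling, weighted fusion) can be summarized in code. Below is a minimal sketch in PyTorch of the two resolution-dependent stages, assuming a grayscale guide image and softmax-normalized weight maps; the fully convolutional network that produces the low-resolution weight maps is omitted. The function names (box_filter, guided_upsample, fuse), window radius r, and regularizer eps are illustrative assumptions, not the released MEFNet code.

import torch
import torch.nn.functional as F

def box_filter(x, r):
    # Local mean over a (2r+1)x(2r+1) window, implemented as average pooling.
    return F.avg_pool2d(x, kernel_size=2 * r + 1, stride=1, padding=r,
                        count_include_pad=False)

def guided_upsample(lr_guide, lr_weights, hr_guide, r=1, eps=1e-4):
    # Guided filter used as a joint upsampler: fit a local linear model
    # lr_weights ~ a * lr_guide + b at low resolution, bilinearly upsample
    # the coefficients, and apply them to the full-resolution guide.
    # lr_guide: (N, 1, h, w), lr_weights: (N, K, h, w), hr_guide: (N, 1, H, W).
    mean_g = box_filter(lr_guide, r)
    mean_w = box_filter(lr_weights, r)
    cov_gw = box_filter(lr_guide * lr_weights, r) - mean_g * mean_w
    var_g = box_filter(lr_guide * lr_guide, r) - mean_g * mean_g
    a = cov_gw / (var_g + eps)          # slope of the local linear model
    b = mean_w - a * mean_g             # intercept
    size = hr_guide.shape[-2:]
    a = F.interpolate(a, size=size, mode='bilinear', align_corners=False)
    b = F.interpolate(b, size=size, mode='bilinear', align_corners=False)
    return a * hr_guide + b             # high-res weight maps (N, K, H, W)

def fuse(hr_weights, exposures):
    # Weighted fusion: softmax-normalize the K weight maps across the
    # exposure dimension, then compute Y = sum_k W_k * X_k.
    # hr_weights: (N, K, H, W), exposures: (N, K, C, H, W).
    w = torch.softmax(hr_weights, dim=1)
    return (w.unsqueeze(2) * exposures).sum(dim=1)   # fused image (N, C, H, W)

Because the linear coefficients are fitted on the downsampled sequence and only applied at full resolution, the expensive network forward pass is independent of the output size, which is consistent with the abstract's reported 10 to 1000 times speedup over methods that operate at full resolution throughout.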
Year
2020
DOI
10.1109/TIP.2019.2952716
Venue
IEEE Transactions on Image Processing
Keywords
Multi-exposure image fusion, convolutional neural networks, guided filtering, computational photography
DocType
Journal
Volume
29
Issue
1
ISSN
1057-7149
Citations
2
PageRank
0.37
References
21
Authors
5
Name                Order  Citations  PageRank
Kede Ma                 1        773     27.93
Zhengfang Duanmu        2        171      8.24
Hanwei Zhu              3          6      2.13
Yuming Fang             4       1247     75.50
Z Wang                  5      13331    630.91