Title
A novel method for fusion of differently exposed images based on spatial distribution of intensity for ubiquitous multimedia
Abstract
Exposure fusion is an efficient way to produce a high-quality image for common Low Dynamic Range (LDR) output devices from multiple differently exposed LDR images of the same scene, and it has significant potential for application in the ubiquitous multimedia area. Generating a fused image with high local contrast from few exposures remains a challenging task. This paper proposes a novel method that fuses two differently exposed images based on the spatial distribution of intensity, in two steps. First, weights are computed from the background context of the average image to produce an initial fused image. Second, the initial fused image is enhanced by removing the background context and efficiently re-fusing the images. The proposed method thus improves local contrast in dark regions while preserving color in bright regions. Experimental results and comparisons with existing exposure fusion methods demonstrate that the proposed method performs better and is well suited to GPU implementation.
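The first step described in the abstract (per-pixel weighting of two exposures, then normalized blending) can be sketched as follows. This is a minimal illustration, not the paper's method: the abstract does not give the background-context weight formula, so a standard well-exposedness weight is substituted as a stand-in, and the function name and `sigma` parameter are assumptions.

```python
import numpy as np

def fuse_two_exposures(under, over, sigma=0.2):
    """Fuse an under- and an over-exposed image (float arrays in [0, 1]).

    Stand-in weighting: the paper derives weights from the background
    context of the average image; here pixels near mid-gray simply get
    high weight (a common well-exposedness heuristic).
    """
    w_under = np.exp(-((under - 0.5) ** 2) / (2 * sigma ** 2))
    w_over = np.exp(-((over - 0.5) ** 2) / (2 * sigma ** 2))
    total = w_under + w_over + 1e-12          # avoid division by zero
    fused = (w_under * under + w_over * over) / total
    return np.clip(fused, 0.0, 1.0)
```

For symmetric inputs (e.g. 0.1 and 0.9) the two weights are equal and the result is their average; in general, better-exposed pixels dominate, which is the intuition behind the paper's weighted fusion step.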
Year: 2015
DOI: 10.1007/s11042-013-1660-0
Venue: Multimedia Tools and Applications
Keywords: Exposure fusion, Spatial distribution of intensity, Background context, Local contrast enhancement
Field: Output device, Computer vision, Pattern recognition, Computer science, Low dynamic range, Fusion, Artificial intelligence, Fuse (electrical), Multimedia, Exposure fusion
DocType: Journal
Volume: 74
Issue: 8
ISSN: 1380-7501
Citations: 1
PageRank: 0.42
References: 10
Authors: 6

Name          Order  Citations  PageRank
Mali Yu       1      6          1.53
Enmin Song    2      176        24.53
Renchao Jin   3      30         8.83
Hong Liu      4      96         18.53
Xiangyang Xu  5      76         10.40
Guangzhi Ma   6      30         4.36