Title
Joint Depth and Density Guided Single Image De-Raining
Abstract
Single image de-raining is an important and highly challenging problem. To address it, several depth-guided or density-guided single-image de-raining methods have been developed with encouraging performance. However, these methods use either depth or density alone to guide the de-raining network. In this paper, a novel joint depth and density guided de-raining (JDDGD) method is developed. JDDGD starts with a depth-density inference network (DDINet) that extracts depth and density information from an input rainy image, followed by a depth-density-based conditional generative adversarial network (DD-CGAN) that exploits the information provided by DDINet to achieve adaptive rain streak and fog removal. To prevent spatially-varying local artifacts, an effective global-local discriminators structure is introduced in the proposed DD-CGAN to inspect the generated images both globally and locally. In addition, multiple loss functions, including a multi-scale pixel loss, a multi-scale perceptual loss, and a global-local generative adversarial loss, are jointly used to train the model for the best performance.
Both quantitative and qualitative results show that the proposed JDDGD method outperforms previous non-guided, density-guided, and depth-guided de-raining methods.
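The multi-scale pixel loss mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes an L1 (mean absolute error) pixel distance computed at several average-pooled resolutions with equal weighting; the paper's exact scales, distance, and weights are not given in this record.

```python
import numpy as np

def avg_pool(img, k):
    """Downsample an HxW image by factor k with average pooling."""
    h, w = img.shape[0] // k * k, img.shape[1] // k * k
    img = img[:h, :w]
    return img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def multiscale_pixel_loss(pred, target, factors=(1, 2, 4)):
    """L1 distance between prediction and target, averaged over
    several resolutions (factors and weighting are assumptions)."""
    losses = [np.abs(avg_pool(pred, f) - avg_pool(target, f)).mean()
              for f in factors]
    return float(np.mean(losses))
```

In a full training setup this term would be combined with the multi-scale perceptual loss (features from a pretrained network) and the global-local adversarial losses as a weighted sum.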
Year
2022
DOI
10.1109/TCSVT.2021.3121012
Venue
IEEE Transactions on Circuits and Systems for Video Technology
Keywords
Single image de-raining, depth-density inference network, depth-density-based conditional generative adversarial network, global-local discriminators
DocType
Journal
Volume
32
Issue
7
ISSN
1051-8215
Citations
0
PageRank
0.34
References
28
Authors
6
Name             Order  Citations  PageRank
Lei Cai          1      53         19.97
Yuli Fu          2      200        29.90
Tao Zhu          3      7          4.13
Youjun Xiang     4      4          2.09
Ying Zhang       5      0          0.34
Huanqiang Zeng   6      395        36.94