Title
Best of Both Worlds: See and Understand Clearly in the Dark
Abstract
Recently, with the development of intelligent technology, perception of low-light scenes has been gaining widespread attention. However, existing techniques usually focus on only one task (e.g., enhancement) and lose sight of the others (e.g., detection), making it difficult to perform all of them well at the same time. To overcome this limitation, we propose a new method that handles visual quality enhancement and semantic-related tasks (e.g., detection, segmentation) simultaneously within a unified framework. Specifically, we build a cascaded architecture to meet the requirements of both tasks. To better exploit the entanglement between the two tasks and achieve mutual guidance, we develop a new contrastive-alternative learning strategy for learning the model parameters, which substantially improves the representational capacity of the cascaded architecture. Notably, the contrastive learning mechanism essentially establishes communication between the two objective tasks, which extends the capability of contrastive learning to some extent. Finally, extensive experiments are performed to fully validate the advantages of our method over other state-of-the-art works in enhancement, detection, and segmentation. A series of analytical evaluations further demonstrates its effectiveness. The code is available at https://github.com/k914/contrastive-alternative-learning.
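The abstract only outlines the approach at a high level. As a rough illustration of the kind of cascaded architecture with contrastive and alternating training it describes, the sketch below (a minimal toy example, not the authors' released code; all module names, losses, and the random stand-in data are assumptions) couples a small enhancement stage with a segmentation head, alternates their updates, and adds an InfoNCE-style term that ties the enhanced output to a normal-light reference.

```python
# Minimal sketch (not the paper's implementation): a cascaded low-light pipeline
# where an enhancement module feeds a downstream semantic head, trained with an
# alternating schedule plus a simple contrastive term coupling the two stages.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Enhancer(nn.Module):
    """Toy enhancement stage: predicts a residual correction for a dark image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1))
    def forward(self, x):
        return torch.clamp(x + self.net(x), 0.0, 1.0)

class SemanticHead(nn.Module):
    """Toy semantic stage: per-pixel class logits on the enhanced image."""
    def __init__(self, num_classes=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, num_classes, 1))
    def forward(self, x):
        return self.net(x)

def info_nce(anchor, positive, negatives, tau=0.2):
    """InfoNCE over global descriptors: pull anchor toward positive, away from negatives."""
    anchor = F.normalize(anchor, dim=1)
    positive = F.normalize(positive, dim=1)
    negatives = F.normalize(negatives, dim=1)
    pos = (anchor * positive).sum(dim=1, keepdim=True) / tau   # (B, 1)
    neg = anchor @ negatives.t() / tau                         # (B, K)
    logits = torch.cat([pos, neg], dim=1)
    labels = torch.zeros(anchor.size(0), dtype=torch.long)     # positive is index 0
    return F.cross_entropy(logits, labels)

def pooled(x):
    return F.adaptive_avg_pool2d(x, 1).flatten(1)  # crude global feature descriptor

# Alternating schedule: even steps update the enhancer, odd steps the semantic head,
# while the contrastive term keeps the enhanced output close to a normal-light
# reference and away from the raw dark input in the pooled feature space.
enhancer, head = Enhancer(), SemanticHead()
opt_e = torch.optim.Adam(enhancer.parameters(), lr=1e-4)
opt_s = torch.optim.Adam(head.parameters(), lr=1e-4)

for step in range(4):  # tiny demo loop; random tensors stand in for real data
    dark = torch.rand(2, 3, 64, 64) * 0.2       # low-light input
    ref = torch.rand(2, 3, 64, 64)              # normal-light reference
    seg_gt = torch.randint(0, 5, (2, 64, 64))   # segmentation labels

    enhanced = enhancer(dark)
    logits = head(enhanced)
    loss_sem = F.cross_entropy(logits, seg_gt)
    loss_enh = F.l1_loss(enhanced, ref)
    loss_con = info_nce(pooled(enhanced), pooled(ref), pooled(dark))

    if step % 2 == 0:   # enhancement stage update, guided by the semantic loss too
        opt_e.zero_grad()
        (loss_enh + loss_con + loss_sem).backward()
        opt_e.step()
    else:               # semantic stage update on the current enhanced output
        opt_s.zero_grad()
        loss_sem.backward()
        opt_s.step()
```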
Year
2022
DOI
10.1145/3503161.3548259
Venue
International Multimedia Conference
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
6
Name          Order  Citations  PageRank
Xinwei Xue    1      0          0.34
Jia He        2      0          0.34
Long Ma       3      48         15.10
Yi Wang       4      0          0.34
Xin Fan       5      776        104.55
Risheng Liu   6      833        59.64