Title: Unsupervised video object segmentation with distractor-aware online adaptation
Abstract: Unsupervised video object segmentation is a crucial task in video analysis when no prior information about the objects is available. It becomes tremendously challenging when multiple objects appear and interact in a video clip. In this paper, a novel unsupervised video object segmentation approach via distractor-aware online adaptation (DOA) is proposed. DOA models spatiotemporal consistency in video sequences by capturing background dependencies from adjacent frames. Instance proposals are generated by an instance segmentation network for each frame and grouped by motion information into positives and hard negatives. To select high-quality hard negatives, a block matching algorithm is then applied to preceding frames to track the associated hard negatives. General negatives are also introduced when no hard negatives exist in the sequence, and the experimental results demonstrate that these two kinds of negatives are complementary. Finally, DOA uses the positive, negative, and hard-negative masks to update the foreground and background segmentation. The proposed approach achieves state-of-the-art results on two benchmark datasets: DAVIS 2016 and Freiburg-Berkeley motion segmentation (FBMS)-59.
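The abstract mentions applying a block matching algorithm to preceding frames to track hard negatives. As an illustration only (the function name, search radius, and synthetic frame data below are assumptions, not details from the paper), a minimal sketch of exhaustive block matching with a sum-of-absolute-differences (SAD) criterion might look like this:

```python
import numpy as np

def block_match(prev_frame, patch, top, left, search=4):
    """Locate `patch` in `prev_frame` by exhaustive SAD search
    within +/-`search` pixels of the position (top, left).
    Returns the (row, col) of the best-matching block."""
    h, w = patch.shape
    best_sad, best_pos = None, (top, left)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            # Skip candidate blocks that fall outside the frame.
            if y < 0 or x < 0 or y + h > prev_frame.shape[0] or x + w > prev_frame.shape[1]:
                continue
            sad = np.abs(prev_frame[y:y + h, x:x + w] - patch).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best_pos = sad, (y, x)
    return best_pos

# Synthetic example: a bright 4x4 block sits at (10, 10) in the
# preceding frame and has drifted to (12, 13) in the current frame.
prev = np.zeros((32, 32), dtype=np.float32)
prev[10:14, 10:14] = 1.0
patch = np.ones((4, 4), dtype=np.float32)  # block as seen in the current frame
print(block_match(prev, patch, 12, 13))    # → (10, 10)
```

In the paper's setting, the matched position in the preceding frame would be used to associate a hard-negative proposal across frames; this sketch only shows the matching step itself.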
Year: 2018
DOI: 10.1016/j.jvcir.2020.102953
Venue: Journal of Visual Communication and Image Representation
Keywords: Unsupervised video object segmentation, Pseudo ground truth, Motion saliency, Hard negative mining, Online adaptation
DocType: Journal
Volume: 74
ISSN: 1047-3203
Citations: 0
PageRank: 0.34
References: 21
Authors: 8
Name            Order  Citations  PageRank
Ye Wang         1      5          6.52
Jongmoo Choi    2      302        21.82
Yueru Chen      3      19         3.50
Siyang Li       4      29         4.55
Qin Huang       5      30         11.60
Kaitai Zhang    6      1          1.72
Ming-Sui Lee    7      109        15.00
C.-C. Jay Kuo   8      8          2.24