Title
Video Class Agnostic Segmentation Benchmark for Autonomous Driving
Abstract
Semantic segmentation approaches are typically trained on large-scale data with a closed, finite set of known classes, without considering unknown objects. In safety-critical robotics applications, especially autonomous driving, it is important to segment all objects, including those unknown at training time. We formalize the task of video class agnostic segmentation from monocular video sequences in autonomous driving to account for unknown objects. Video class agnostic segmentation can be formulated as either an open-set or a motion segmentation problem. We discuss both formulations, provide datasets, and benchmark baseline approaches for both tracks. In the motion-segmentation track we benchmark real-time joint panoptic and motion instance segmentation and evaluate the effect of ego-flow suppression. In the open-set segmentation track we evaluate baseline methods that combine appearance and geometry to learn prototypes per semantic class, and compare them to a model that uses an auxiliary contrastive loss to improve discrimination between known and unknown objects. Datasets and models are publicly released at https://msiam.github.io/vca/.
Year: 2021
DOI: 10.1109/CVPRW53098.2021.00317
Venue: 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2021)
DocType: Conference
ISSN: 2160-7508
Citations: 0
PageRank: 0.34
References: 0
Authors: 3
Name                Order   Citations   PageRank
Mennatullah Siam    1       36          7.06
Alex Kendall        2       15945       0.51
Martin Jagersand    3       1001        0.96