Title
FEELVOS: Fast End-to-End Embedding Learning for Video Object Segmentation
Abstract
Many of the recent successful methods for video object segmentation (VOS) are overly complicated, heavily rely on fine-tuning on the first frame, and/or are slow, and are hence of limited practical use. In this work, we propose FEELVOS as a simple and fast method which does not rely on fine-tuning. In order to segment a video, for each frame FEELVOS uses a semantic pixel-wise embedding together with a global and a local matching mechanism to transfer information from the first frame and from the previous frame of the video to the current frame. In contrast to previous work, our embedding is only used as an internal guidance of a convolutional network. Our novel dynamic segmentation head allows us to train the network, including the embedding, end-to-end for the multiple object segmentation task with a cross entropy loss. We achieve a new state of the art in video object segmentation without fine-tuning with a J&F measure of 71.5% on the DAVIS 2017 validation set.
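A minimal sketch of the global matching idea described in the abstract: each current-frame pixel embedding is compared against the first-frame pixel embeddings of an object, and the minimum distance forms a soft guidance map for the segmentation head. The distance follows the paper's formulation d(p, q) = 1 - 2 / (1 + exp(||e_p - e_q||^2)); the function names, array shapes, and toy data below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def pairwise_distance(curr_emb, ref_emb):
    """Embedding distance d(p, q) = 1 - 2 / (1 + exp(||e_p - e_q||^2)).

    curr_emb: (P, C) embeddings of the current frame's pixels.
    ref_emb:  (Q, C) embeddings of reference (e.g. first-frame) pixels.
    Returns a (P, Q) matrix of distances in [0, 1).
    """
    # Squared Euclidean distances between all pixel pairs via broadcasting.
    sq = (
        np.sum(curr_emb ** 2, axis=1, keepdims=True)
        + np.sum(ref_emb ** 2, axis=1)
        - 2.0 * curr_emb @ ref_emb.T
    )
    # Clip to avoid overflow in exp; large distances saturate near 1 anyway.
    sq = np.clip(sq, 0.0, 50.0)
    return 1.0 - 2.0 / (1.0 + np.exp(sq))

def global_matching_map(curr_emb, first_emb, first_mask):
    """Distance from each current-frame pixel to the nearest first-frame pixel of one object.

    curr_emb:   (H*W, C) current-frame embeddings.
    first_emb:  (H*W, C) first-frame embeddings.
    first_mask: (H*W,) boolean mask of the object in the first frame.
    Returns an (H*W,) distance map used as soft guidance, not as a hard assignment.
    """
    d = pairwise_distance(curr_emb, first_emb[first_mask])
    return d.min(axis=1)

# Toy usage with random embeddings (H = W = 8, C = 16 are illustrative sizes).
rng = np.random.default_rng(0)
curr = rng.normal(size=(64, 16)).astype(np.float32)
first = rng.normal(size=(64, 16)).astype(np.float32)
mask = rng.random(64) > 0.7  # hypothetical first-frame object mask
print(global_matching_map(curr, first, mask).shape)  # (64,)
```

In the full method this map is computed per object and complemented by a local matching map restricted to a window around each pixel in the previous frame; both maps, together with the previous-frame predictions, are fed as internal guidance into the dynamic segmentation head, which is trained end-to-end with a cross-entropy loss.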
Year
2019
DOI
10.1109/CVPR.2019.00971
Venue
2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019)
Field
Computer vision, Embedding, Pattern recognition, Computer science, Segmentation, End-to-end principle, Artificial intelligence
DocType
Journal
Volume
abs/1902.09513
ISSN
1063-6919
Citations
28
PageRank
0.77
References
13
Authors
6
Name | Order | Citations | PageRank
Paul Voigtlaender | 1 | 116 | 7.80
Yuning Chai | 2 | 269 | 11.44
Florian Schroff | 3 | 757 | 32.72
Hartwig Adam | 4 | 1326 | 42.50
Bastian Leibe | 5 | 5191 | 312.07
Liang-Chieh Chen | 6 | 2272 | 77.92