Title
Fast target-aware learning for few-shot video object segmentation
Abstract
Few-shot video object segmentation (FSVOS) aims to segment a specific object throughout a video sequence when only the first-frame annotation is given. In this study, we develop a fast target-aware learning approach for FSVOS, in which the proposed approach adapts to a new video sequence from its first-frame annotation through a lightweight procedure. The proposed network comprises two models. First, the meta knowledge model learns general semantic features of the input video frames and up-samples the coarse predicted mask to the original image size. Second, the target model adapts quickly to the limited support set. Concretely, during online inference on a test video, we first employ fast optimization techniques to train a powerful target model by minimizing the segmentation error on the first frame, and then use it to predict the subsequent frames. During offline training, we use a bilevel-optimization strategy to mimic the full testing procedure and train the meta knowledge model across multiple video sequences. The proposed method is trained only on a single public video object segmentation (VOS) benchmark without additional training sets, yet it compares favorably with state-of-the-art methods on DAVIS-2017, with a $${\cal J} \& {\cal F}$$ overall score of 71.6%, and on YouTube-VOS 2018, with a $${\cal J} \& {\cal F}$$ overall score of 75.4%, while maintaining a high inference speed of approximately 0.13 s per frame.
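The online adaptation described in the abstract can be illustrated with a short sketch. The following PyTorch-style example is a minimal illustration under assumptions, not the authors' implementation: the names TargetHead, fit_target_model, segment_video, and naive_decoder are hypothetical, a frozen feature extractor stands in for the meta knowledge model, and the learned up-sampling decoder is replaced by plain bilinear interpolation. It shows only the idea of fitting a lightweight target model on the annotated first frame and reusing it on later frames.

```python
# Sketch of first-frame adaptation for FSVOS-style inference (hypothetical names,
# stand-in modules; not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class TargetHead(nn.Module):
    """Lightweight per-video target model: a small conv head on top of frozen
    backbone features, producing a coarse foreground score map."""
    def __init__(self, in_channels: int = 256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.conv(feats)


def fit_target_model(backbone: nn.Module, first_frame: torch.Tensor,
                     first_mask: torch.Tensor, steps: int = 20,
                     lr: float = 1e-1) -> TargetHead:
    """Fast first-frame optimization: minimize the segmentation error of the
    target model on the support set (frame 1 and its annotation).
    first_frame: [B, 3, H, W]; first_mask: float [B, 1, H, W]."""
    with torch.no_grad():                      # meta features stay fixed online
        feats = backbone(first_frame)
    head = TargetHead(feats.shape[1])
    opt = torch.optim.Adam(head.parameters(), lr=lr)
    target = F.interpolate(first_mask, size=feats.shape[-2:], mode="nearest")
    for _ in range(steps):
        opt.zero_grad()
        loss = F.binary_cross_entropy_with_logits(head(feats), target)
        loss.backward()
        opt.step()
    return head


def naive_decoder(coarse: torch.Tensor, out_size) -> torch.Tensor:
    # Placeholder for the learned up-sampling decoder in the meta knowledge model.
    return F.interpolate(coarse, size=out_size, mode="bilinear", align_corners=False)


@torch.no_grad()
def segment_video(backbone, decoder, head, frames):
    """Apply the adapted target model to the remaining frames; the decoder
    up-samples the coarse prediction back to image resolution."""
    masks = []
    for frame in frames:
        coarse = head(backbone(frame))
        masks.append(torch.sigmoid(decoder(coarse, frame.shape[-2:])))
    return masks


# Example usage with a dummy backbone (a single conv layer as a stand-in):
# backbone = nn.Conv2d(3, 256, 3, padding=1).eval()
# head = fit_target_model(backbone, first_frame, first_mask)
# masks = segment_video(backbone, naive_decoder, head, remaining_frames)
```

In the offline stage, the bilevel-optimization strategy mentioned in the abstract would correspond to running such an inner adaptation loop inside an outer loop that updates the feature extractor and decoder across many training videos; that outer loop is omitted from this sketch.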
Year
2022
DOI
10.1007/s11432-021-3396-7
Venue
Science China Information Sciences
Keywords
video object segmentation, few-shot, target-aware, meta knowledge, bilevel-optimization
DocType
Journal
Volume
65
Issue
8
ISSN
1674-733X
Citations
0
PageRank
0.34
References
7
Authors
4
Name | Order | Citations | PageRank
Yadang Chen | 1 | 4 | 1.79
Chuanyan Hao | 2 | 4 | 1.79
Zhixin Yang | 3 | 118 | 23.12
Enhua Wu | 4 | 916 | 115.33