Title
Triple attention network for video segmentation
Abstract
Video segmentation automatically segments a target object throughout a video and has recently made good progress thanks to the development of deep convolutional neural networks (DCNNs). However, simultaneously capturing long-range dependencies in multiple spaces remains an important issue in video segmentation. In this paper, we propose a novel triple attention network (TriANet) that simultaneously exploits temporal, spatial, and channel contextual knowledge by using the self-attention mechanism to enhance the discriminative power of feature representations. We evaluate our method on the Shining3D dental, DAVIS16, and DAVIS17 datasets, and the results show that it is competitive with other state-of-the-art video segmentation methods.
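The abstract describes self-attention applied along three axes of a video feature tensor. Below is a minimal sketch of such a triple-attention block, assuming a PyTorch implementation over features of shape (B, T, C, H, W); the module name TripleAttention, the unprojected dot-product attention (no learned query/key/value projections), and the residual-sum fusion are illustrative assumptions, not TriANet's published architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

def self_attention(x):
    # x: (batch, tokens, dim) -> scaled dot-product self-attention.
    # Simplified: no learned query/key/value projections (an assumption).
    scores = torch.bmm(x, x.transpose(1, 2)) / x.size(-1) ** 0.5
    return torch.bmm(F.softmax(scores, dim=-1), x)

class TripleAttention(nn.Module):
    """Self-attention along the temporal, spatial, and channel axes."""
    def forward(self, feats):
        # feats: (B, T, C, H, W) video feature maps
        b, t, c, h, w = feats.shape
        # Temporal branch: attend across the T frames at each spatial position.
        x = feats.permute(0, 3, 4, 1, 2).reshape(b * h * w, t, c)
        temporal = self_attention(x).reshape(b, h, w, t, c).permute(0, 3, 4, 1, 2)
        # Spatial branch: attend across the H*W positions within each frame.
        x = feats.reshape(b * t, c, h * w).transpose(1, 2)
        spatial = self_attention(x).transpose(1, 2).reshape(b, t, c, h, w)
        # Channel branch: attend across the C channels within each frame.
        x = feats.reshape(b * t, c, h * w)
        channel = self_attention(x).reshape(b, t, c, h, w)
        # Fuse the three branches with a residual sum (an assumed fusion rule).
        return feats + temporal + spatial + channel

# Usage: enhance backbone features before a segmentation head.
feats = torch.randn(2, 4, 64, 14, 14)   # (B, T, C, H, W)
out = TripleAttention()(feats)           # same shape as the input

In a full model each branch would typically include learned projections and normalization; the sketch only shows how the same self-attention operator can be reoriented along the temporal, spatial, and channel dimensions.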
Year
2020
DOI
10.1016/j.neucom.2020.07.078
Venue
Neurocomputing
Keywords
Video segmentation, Computer vision, Deep learning, Convolutional neural network
DocType
Journal
Volume
417
ISSN
0925-2312
Citations
2
PageRank
0.36
References
0
Authors
6
Name            Order   Citations   PageRank
Yan Tian        1       47          8.52
Yujie Zhang     2       251         52.63
Di Zhou         3       2           1.38
Guohua Cheng    4       5           1.44
Weigang Chen    5       9           2.18
Ruili Wang      6       446         50.35