Abstract |
---|
Video segmentation automatically segments a target object throughout a video and has recently made substantial progress owing to the development of deep convolutional neural networks (DCNNs). However, simultaneously capturing long-range dependencies in multiple spaces remains an important issue in video segmentation. In this paper, we propose a novel triple attention network (TriANet) that simultaneously exploits temporal, spatial, and channel context by using the self-attention mechanism to enhance the discriminative ability of feature representations. We verify our method on the Shining3D dental, DAVIS16, and DAVIS17 datasets, and the results show our method to be competitive with other state-of-the-art video segmentation methods. |
Year | DOI | Venue |
---|---|---|
2020 | 10.1016/j.neucom.2020.07.078 | Neurocomputing |
Keywords | DocType | Volume
---|---|---|
Video segmentation, Computer vision, Deep learning, Convolutional neural network | Journal | 417
ISSN | Citations | PageRank
---|---|---|
0925-2312 | 2 | 0.36
References | Authors
---|---|
0 | 6
Name | Order | Citations | PageRank |
---|---|---|---|
Yan Tian | 1 | 47 | 8.52 |
Yujie Zhang | 2 | 251 | 52.63 |
Di Zhou | 3 | 2 | 1.38 |
Guohua Cheng | 4 | 5 | 1.44 |
Weigang Chen | 5 | 9 | 2.18 |
Ruili Wang | 6 | 446 | 50.35 |