Abstract
---
While data-driven methods for image saliency detection have become increasingly mature, video saliency detection, which involves additional inter-frame motion and temporal information, still requires further exploration. Unlike images, video data contain not only rich semantic information but also abundant contextual information and motion features. Video saliency also varies across scene types; in movie scenes, faces provide the strongest visual stimulus to the viewer. Targeting this specific movie-scene setting, we propose an efficient and novel video attention prediction model with auxiliary facial saliency (AFSnet) to predict human eye fixation locations in movie scenes. The proposed model takes an FCN as its basic structure and improves prediction by adaptively combining facial saliency hints. Qualitative and quantitative experiments demonstrate the validity of the model.
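The abstract describes adaptively combining a base saliency prediction with facial saliency hints but does not give the fusion rule. As a minimal illustrative sketch only (the function name, the energy-based weighting, and all parameters are assumptions, not the authors' actual AFSnet method), an adaptive blend of two saliency maps could look like:

```python
def fuse_saliency(base, face, alpha=None):
    """Adaptively blend a base saliency map with a facial saliency map.

    `base` and `face` are 2-D lists of floats in [0, 1] with the same shape.
    If `alpha` is None, the blend weight is derived from the mean
    facial-saliency energy, so frames with prominent faces lean more
    heavily on the facial cue. (Hypothetical scheme for illustration;
    not the paper's exact formulation.)
    """
    h, w = len(base), len(base[0])
    if alpha is None:
        face_energy = sum(v for row in face for v in row) / (h * w)
        alpha = min(max(face_energy, 0.0), 1.0)  # clamp weight to [0, 1]
    # Per-pixel convex combination of the two cues.
    return [[(1 - alpha) * base[i][j] + alpha * face[i][j]
             for j in range(w)] for i in range(h)]
```

For example, a frame whose facial map is mostly empty yields a small `alpha`, leaving the base prediction nearly unchanged, while a face-dominated frame shifts attention toward the facial regions.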
Year | Venue | Field |
---|---|---|
2018 | BICS | Human eye,Computer vision,Contextual information,Computer science,Salience (neuroscience),Semantic information,Artificial intelligence |
DocType | Citations | PageRank
---|---|---
Conference | 0 | 0.34
References | Authors
---|---
21 | 4
Name | Order | Citations | PageRank |
---|---|---|---|
Ziqi Zhou | 1 | 0 | 0.68 |
Meijun Sun | 2 | 74 | 11.77 |
Jinchang Ren | 3 | 1144 | 88.54 |
Zheng Wang | 4 | 84 | 8.26 |