Title
Fusion of multiple visual cues for visual saliency extraction from wearable camera settings with strong motion
Abstract
In this paper we are interested in the saliency of visual content from wearable cameras. Subjective saliency in wearable video is studied first through a psycho-visual experiment on this content. Then a method for objective saliency map computation is proposed, with a specific contribution based on geometrical saliency. The fusion of spatial, temporal and geometric cues into an objective saliency map is realized by a multiplicative operator. The resulting objective saliency maps are evaluated against the subjective maps with promising results, highlighting the interesting performance of the proposed geometric saliency model.
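The abstract describes combining spatial, temporal and geometric saliency cues with a multiplicative operator. Below is a minimal Python sketch of such pixel-wise multiplicative fusion, assuming each cue is available as a 2-D array; the function name fuse_saliency_maps, the min-max normalisation step and the eps constant are illustrative assumptions, not the paper's exact formulation.

import numpy as np

def fuse_saliency_maps(spatial, temporal, geometric, eps=1e-8):
    # Hypothetical multiplicative fusion of three saliency cues.
    # Each input is a 2-D array of per-pixel saliency values.
    maps = []
    for m in (spatial, temporal, geometric):
        m = np.asarray(m, dtype=np.float64)
        # Min-max normalise each cue to [0, 1] (eps avoids division by zero).
        m = (m - m.min()) / (m.max() - m.min() + eps)
        maps.append(m)
    # Pixel-wise product: a pixel stays salient only if all cues agree.
    fused = maps[0] * maps[1] * maps[2]
    # Renormalise so the fused map again spans [0, 1].
    return (fused - fused.min()) / (fused.max() - fused.min() + eps)

Multiplicative fusion emphasises regions supported by every cue; a pixel scored low by any single cue is suppressed in the fused map, which is one common rationale for choosing a product over a weighted sum.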
Year
2012
DOI
10.1007/978-3-642-33885-4_44
Venue
ECCV Workshops (3)
Keywords
multiple visual cue, strong motion, wearable video, wearable camera, visual saliency extraction, objective saliency map computation, visual content, subjective map, subjective saliency, wearable camera setting, proposed geometric saliency model, geometrical saliency, geometric cue, objective saliency map
Field
Sensory cue, Computer vision, Saliency map, Pattern recognition, Salience (neuroscience), Computer science, Wearable computer, Artificial intelligence, Visual saliency, Computation
DocType
Conference
Volume
7585
ISSN
0302-9743
Citations
15
PageRank
0.73
References
11
Authors
3
Name                  Order  Citations  PageRank
Hugo Boujut           1      15         1.07
Jenny Benois-Pineau   2      4355       4.91
Remi Megret           3      39         2.75