Abstract |
---|
Over the last few decades, selective visual attention has been studied extensively for its promising contributions to computer vision applications. Many models have been proposed to compute visual saliency; these can be coarsely categorized as computational or psychophysical. Most existing methods rely on a bottom-up mechanism, the automatic human behavior that guides gaze allocation, and commonly adopt low-level features such as color, intensity, and orientation to compute the saliency map. In this work, we propose a saliency computation method that integrates high-level object information with low-level features. The resulting map is better suited to top-down tasks in the field of mobile robotics that require object information. |
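The bottom-up mechanism the abstract refers to is typically realized as center-surround contrast over low-level feature channels. The sketch below illustrates the idea on the intensity channel alone, loosely in the spirit of classic Itti-Koch-style models; the function names, pyramid scales, and normalization are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def box_downsample(img, factor):
    """Downsample by averaging non-overlapping factor x factor blocks."""
    h, w = img.shape
    h2, w2 = h // factor * factor, w // factor * factor
    img = img[:h2, :w2]
    return img.reshape(h2 // factor, factor, w2 // factor, factor).mean(axis=(1, 3))

def upsample(img, shape):
    """Nearest-neighbour upsample back to the original shape."""
    ys = (np.arange(shape[0]) * img.shape[0] // shape[0]).clip(0, img.shape[0] - 1)
    xs = (np.arange(shape[1]) * img.shape[1] // shape[1]).clip(0, img.shape[1] - 1)
    return img[np.ix_(ys, xs)]

def intensity_saliency(img, factors=(2, 4)):
    """Center-surround saliency: |fine - coarse| accumulated over scales,
    then normalised to [0, 1]. Uniform regions get zero saliency; local
    contrast (edges, isolated bright patches) gets high saliency."""
    img = img.astype(float)
    sal = np.zeros_like(img)
    for f in factors:  # each factor defines one "surround" scale
        coarse = upsample(box_downsample(img, f), img.shape)
        sal += np.abs(img - coarse)
    rng = sal.max() - sal.min()
    return (sal - sal.min()) / rng if rng > 0 else sal
```

A full bottom-up model would compute analogous maps for color-opponency and orientation channels and fuse them; the proposed method additionally weights such maps with high-level object cues.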
Year | DOI | Venue |
---|---|---|
2014 | 10.1109/ICARCV.2014.7064488 | ICARCV |
Keywords | Field | DocType |
---|---|---|
visual saliency,top-down tasks,features,color feature,computer vision applications,saliency map,high level cues,automatic human behavior,gaze tracking,salient region detection,intensity feature,saliency computation method,feature extraction,selective visual attention,visual attention,hog,orientation feature,saliency,computer vision,mobile robot,low-level features,gaze allocation,high-level object information,image colour analysis,computational modeling,detectors,visualization,image segmentation | Object detection,Computer vision,Kadir–Brady saliency detector,Feature detection (computer vision),Pattern recognition,Feature (computer vision),Computer science,Salience (neuroscience),Feature extraction,Image segmentation,Artificial intelligence,Mobile robot | Conference |
ISSN | Citations | PageRank |
---|---|---|
2474-2953 | 1 | 0.35 |
References | Authors |
---|---|
24 | 3 |
Name | Order | Citations | PageRank |
---|---|---|---|
Zhong Liu | 1 | 148 | 26.70 |
Chen Weihai | 2 | 190 | 38.21 |
Xingming Wu | 3 | 43 | 13.16 |