Title
Visual context representation using a combination of feature-driven and object-driven mechanisms
Abstract
Visual context between objects is an important cue for object position perception, and how to represent visual context effectively is a key research issue. Some previous work introduced task-driven methods for object perception, which led to a large coding quantity. This paper proposes an approach that incorporates a feature-driven mechanism into object-driven context representation for object locating. As an example, the paper discusses how a neuronal network encodes the visual context between feature-salient regions and human eye centers with as little coding quantity as possible. A group of experiments on the efficiency of visual context coding and on object searching is analyzed and discussed, showing that the proposed method decreases the coding quantity and effectively improves object searching accuracy.
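As an illustrative aside (not the authors' published implementation), the following minimal Python sketch shows one way a feature-driven context code of the kind the abstract describes could be formed: salient points are detected in an image, the offsets from those points to a known target center (e.g., an eye center) are stored as the context code, and at test time the stored offsets vote for the target position in a new image. All function names, the variance-based saliency cue, and the parameter values are assumptions made for illustration only.

```python
# Illustrative sketch: encode visual context between feature-salient regions
# and a target center, then reuse that code to vote for the target position.
# Hypothetical names and parameters; not the paper's implementation.
import numpy as np
from scipy.ndimage import uniform_filter


def salient_points(image, num_points=5, window=7):
    """Return (row, col) locations of the strongest local-variance responses,
    used here as a crude stand-in for a feature-driven saliency map."""
    img = image.astype(float)
    mean = uniform_filter(img, size=window)
    sq_mean = uniform_filter(img ** 2, size=window)
    saliency = sq_mean - mean ** 2                      # local variance
    flat_idx = np.argsort(saliency, axis=None)[-num_points:]
    return np.column_stack(np.unravel_index(flat_idx, image.shape))


def encode_context(image, target_center, num_points=5):
    """Context code: one 2-D offset from each salient point to the target."""
    points = salient_points(image, num_points)
    return np.asarray(target_center) - points


def predict_target(image, context_codes, num_points=5):
    """Shift each salient point of the new image by every stored offset
    and take the median of all votes as the predicted target position."""
    points = salient_points(image, num_points)
    votes = (points[:, None, :] + context_codes[None, :, :]).reshape(-1, 2)
    return np.median(votes, axis=0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train_img = rng.random((64, 64))
    codes = encode_context(train_img, target_center=(20, 30))
    test_img = rng.random((64, 64))
    print("Predicted target center:", predict_target(test_img, codes))
```

In this toy setup the context code is just a small set of 2-D offsets, which keeps the coding quantity low compared with storing dense task-driven templates; this mirrors the abstract's motivation but is only a sketch of the general idea.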
Year
2008
DOI
10.1109/IJCNN.2008.4634344
Venue
IJCNN
Keywords
image coding, object location, neuronal network, visual context representation, object detection, feature-driven mechanism, neural nets, object perception, object-driven mechanism, artificial neural networks, neural networks
Field
Computer science, Coding (social sciences), Artificial intelligence, Artificial neural network, Computer vision, Object detection, Pattern recognition, Image coding, Object model, Perception, Machine learning, Form perception, Salient
DocType
Conference
ISSN
1098-7576
ISBN
978-1-4244-1821-3 (E-ISBN)
Citations
0
PageRank
0.34
References
6
Authors
5
Name          Order  Citations  PageRank
Jun Miao      1      220        22.17
Lijuan Duan   2      215        26.13
Laiyun Qing   3      337        24.66
Xilin Chen    4      6291       306.27
Wen Gao       5      11374      741.77