Title
Reasoning on Grasp-Action Affordances
Abstract
Artificial intelligence is essential to succeed in challenging activities that involve dynamic environments, such as object manipulation tasks in indoor scenes. Most of the state-of-the-art literature explores robotic grasping methods by focusing exclusively on attributes of the target object. In human perceptual learning, by contrast, these physical qualities are inferred not only from the object itself but also from the characteristics of its surroundings. This work proposes a method that includes environmental context to reason about an object's affordance and then deduce its grasping regions. The affordance is inferred from a ranked association of visual semantic attributes harvested in a knowledge base graph representation. The framework is assessed using standard learning evaluation metrics and the zero-shot affordance prediction scenario. The resulting grasping areas are compared with unseen labelled data to assess their matching accuracy. The outcome of this evaluation suggests that the proposed method is suitable for autonomous object-interaction applications in indoor environments.
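The abstract describes ranking affordances through associations between visual semantic attributes in a knowledge base graph. As a rough, non-authoritative illustration of that idea, the minimal Python sketch below aggregates weighted attribute-to-affordance votes (including a scene-context attribute) and ranks the result. Every attribute name, affordance label, and weight is an invented placeholder, and the additive scoring rule is an assumption for demonstration, not the paper's actual model.

```python
from collections import defaultdict

# Hypothetical attribute -> affordance association weights. All names
# and numbers are invented placeholders, not the paper's learned values.
KB = {
    ("cylindrical", "pour"): 0.8,
    ("cylindrical", "hand-over"): 0.5,
    ("concave", "contain"): 0.9,
    ("metallic", "cut"): 0.6,
    ("kitchen", "pour"): 0.7,    # environmental context also votes
    ("kitchen", "cut"): 0.4,
    ("office", "hand-over"): 0.6,
}

def rank_affordances(attributes):
    """Sum the weighted votes of every observed attribute, then rank."""
    scores = defaultdict(float)
    for (attribute, affordance), weight in KB.items():
        if attribute in attributes:
            scores[affordance] += weight
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Zero-shot-style query: an object never seen before is described only
# by its visual attributes and the room in which it was observed.
print(rank_affordances({"cylindrical", "concave", "kitchen"}))
# 'pour' ranks first here because both shape and scene context support it.
```

Because an unseen object enters the query only through its attributes, the same lookup also serves the zero-shot prediction scenario the abstract mentions.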
Year
2019
DOI
10.1007/978-3-030-23807-0_1
Venue
Towards Autonomous Robotic Systems
DocType
Conference
Volume
abs/1905.10610
Field
GRASP, Ranking, Autonomy, Perceptual learning, Control engineering, Human–computer interaction, Knowledge base, Engineering, Affordance, Graph (abstract data type)
Citations
0
PageRank
0.34
References
0
Authors
5
Name | Order | Citations | PageRank
Paola Ardón | 1 | 0 | 0.68
Èric Pairet | 2 | 0 | 0.68
Ronald P. A. Petrick | 3 | 309 | 24.24
Subramanian Ramamoorthy | 4 | 229 | 45.45
Katrin S. Lohan | 5 | 51 | 14.42