Title: Functional Object Descriptors for Human Activity Modeling
Abstract: The ability to learn from human demonstration is essential for robots in human environments. The activity models that a robot builds from observation must take into account both the human motion and the objects involved. Object models designed for this purpose should reflect the role of the object in the activity: its function, or affordances. The main contribution of this paper is to represent objects directly in terms of their interaction with human hands, rather than in terms of appearance. This enables a direct representation of object affordances/function while remaining robust to intra-class differences in appearance. Object hypotheses are first extracted from a video sequence as tracks of associated image segments. Each object hypothesis is then encoded as a string, where the vocabulary corresponds to different types of interaction with human hands. The similarity between two such object descriptors can be measured with a string kernel. Experiments show that these functional descriptors capture differences and similarities in object affordances/function that are not represented by appearance.
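The abstract describes encoding each object hypothesis as a string over a vocabulary of hand-interaction types and comparing descriptors with a string kernel, but this record does not specify which kernel or vocabulary. The following is a minimal sketch, assuming a p-spectrum string kernel and a hypothetical four-symbol interaction vocabulary; the symbols, example strings, and function names are illustrative and not the authors' implementation.

```python
# Sketch (not the paper's exact formulation): comparing two functional
# object descriptors with a p-spectrum string kernel. The interaction
# vocabulary below is hypothetical; the paper only states that objects
# are encoded as strings of hand-interaction types.
from collections import Counter

# Hypothetical per-frame interaction symbols:
#   'n' = no interaction, 'a' = hand approaching,
#   'g' = grasp/manipulation, 'r' = release


def spectrum_features(s: str, p: int) -> Counter:
    """Count all contiguous substrings (p-grams) of length p in s."""
    return Counter(s[i:i + p] for i in range(len(s) - p + 1))


def spectrum_kernel(s: str, t: str, p: int = 3) -> float:
    """p-spectrum kernel: inner product of the p-gram count vectors."""
    fs, ft = spectrum_features(s, p), spectrum_features(t, p)
    return float(sum(count * ft[gram] for gram, count in fs.items()))


def normalized_kernel(s: str, t: str, p: int = 3) -> float:
    """Cosine-normalized kernel value in [0, 1]."""
    k = spectrum_kernel(s, t, p)
    return k / (spectrum_kernel(s, s, p) * spectrum_kernel(t, t, p)) ** 0.5


if __name__ == "__main__":
    cup = "nnaagggggrnnaagggr"    # repeatedly grasped and released
    pitcher = "nnaaggggggggrnn"   # held for one longer stretch
    # Similar interaction patterns yield a high kernel value even if
    # the two objects look nothing alike.
    print(normalized_kernel(cup, pitcher))
```

Any string kernel defined over this vocabulary would fit the same role; the p-spectrum kernel is used here only because it is compact and self-contained.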
Year: 2013
DOI: 10.1109/ICRA.2013.6630736
Venue: 2013 IEEE International Conference on Robotics and Automation (ICRA)
Keywords: robots, human-robot interaction, image segmentation, motion control, string kernel
Field: Computer vision, Computer science, Direct representation, Object model, Image segmentation, Artificial intelligence, String kernel, Robot, Vocabulary, Affordance, Human–robot interaction
DocType: Conference
Volume: 2013
Issue: 1
ISSN: 1050-4729
Citations: 23
PageRank: 0.68
References: 24
Authors: 3
Name: Alessandro Pieropan, Order: 1, Citations: 53, PageRank: 4.39
Name: Carl Henrik Ek, Order: 2, Citations: 327, PageRank: 30.76
Name: Hedvig Kjellström, Order: 3, Citations: 491, PageRank: 42.24