Title
A KinFu based approach for robot spatial attention and view planning.
Abstract
When a user and a robot share the same physical workspace, the robot may need to keep an updated 3D representation of the environment. Indeed, robot systems often need to reconstruct the relevant parts of the environment where the user executes manipulation tasks. This paper proposes a spatial attention approach for a robot manipulator with an eye-in-hand Kinect range sensor. Salient regions of the environment, where user manipulation actions are more likely to have occurred, are detected by applying a clustering algorithm, based on Gaussian Mixture Models, to the user's hand trajectory. A motion capture sensor is used for hand tracking. The robot's attentional behavior is driven by a next-best view algorithm that computes the most promising range sensor viewpoints for observing the detected salient regions, where changes in the environment may have occurred. The environment representation is built upon the PCL KinFu Large Scale project [1], an open-source implementation of KinectFusion. KinFu has been modified to support the execution of the next-best view algorithm directly on the GPU and to properly manage voxel data. Experiments are reported to illustrate the proposed attention-based approach and to show the effectiveness of GPU-based next-best view planning compared to the same algorithm executed on the CPU.
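As a rough illustration of the saliency-detection step described in the abstract, the sketch below fits a Gaussian Mixture Model to tracked 3D hand positions and treats the mixture components as candidate salient regions for the view planner. This is a minimal, hypothetical example only: the use of scikit-learn, BIC-based selection of the component count, and all function and variable names are assumptions for illustration, not the paper's actual implementation.

# Minimal sketch (assumption: scikit-learn GMM with BIC model selection stands
# in for the paper's clustering of the user's hand trajectory).
import numpy as np
from sklearn.mixture import GaussianMixture

def salient_regions(hand_points, max_components=5):
    """hand_points: (N, 3) array of tracked hand positions in the world frame."""
    best_gmm, best_bic = None, np.inf
    for k in range(1, max_components + 1):
        gmm = GaussianMixture(n_components=k, covariance_type="full",
                              random_state=0).fit(hand_points)
        bic = gmm.bic(hand_points)
        if bic < best_bic:
            best_gmm, best_bic = gmm, bic
    # Mixture means/covariances approximate where manipulation likely occurred.
    return best_gmm.means_, best_gmm.covariances_

if __name__ == "__main__":
    # Synthetic trajectory with two manipulation spots (for demonstration only).
    rng = np.random.default_rng(0)
    pts = np.vstack([rng.normal([0.4, 0.0, 0.8], 0.03, (200, 3)),
                     rng.normal([0.1, 0.3, 0.9], 0.03, (200, 3))])
    means, covs = salient_regions(pts)
    print(means)

The recovered component means and covariances would then serve as the regions of interest that the next-best view algorithm evaluates candidate sensor viewpoints against.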
Year
2016
DOI
10.1016/j.robot.2015.09.010
Venue
Robotics and Autonomous Systems
Keywords
KinectFusion, Point Cloud Library, Robot spatial attention
Field
Robotic systems, Computer vision, Computer science, Workspace, Human–computer interaction, Artificial intelligence, View planning, Robot
DocType
Journal
Volume
75
ISSN
0921-8890
Citations
4
PageRank
0.42
References
31
Authors
3
Name, Order, Citations, PageRank
Riccardo Monica, 1, 13, 7.33
Jacopo Aleotti, 2, 259, 29.76
Stefano Caselli, 3, 314, 36.32