Title: Affordance Learning for End-to-End Visuomotor Robot Control
Abstract:
Training end-to-end deep robot policies requires large amounts of domain-, task-, and hardware-specific data, which is often costly to collect. In this work, we propose to tackle this issue by employing a deep neural network with a modular architecture consisting of separate perception, policy, and trajectory parts. Each part of the system is trained entirely on synthetic data or in simulation. Data is exchanged between the parts of the system as low-dimensional latent representations of affordances and trajectories. Performance is then evaluated in a zero-shot transfer scenario on a Franka Panda robot arm. The results demonstrate that a low-dimensional representation of scene affordances extracted from an RGB image is sufficient to successfully train manipulator policies. We also introduce a method for affordance dataset generation that generalizes easily to new tasks, objects, and environments, and requires no manual pixel labeling.
Year: 2019
DOI: 10.1109/IROS40897.2019.8968596
Venue: 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Field: Robot control, Robotic arm, Control engineering, Synthetic data, Human–computer interaction, Pixel, Engineering, Robot, Artificial neural network, Affordance, Trajectory
DocType:
Volume:
Journal: abs/1903.04053
ISSN: 2153-0858
Citations: 2
PageRank: 0.38
References: 13
Authors: 4
Name                 Order  Citations  PageRank
Aleksi Hämäläinen    1      2          0.38
Karol Arndt          2      2          1.40
Ali Ghadirzadeh      3      13         4.32
V. Kyrki             4      652        61.79