Title
Improving Robot Success Detection Using Static Object Data
Abstract
We use static object data to improve success detection for stacking objects on and nesting objects in one another. Such actions are necessary for certain robotics tasks, e.g., clearing a dining table or packing a warehouse bin. However, using an RGB-D camera to detect success can be insufficient: same-colored objects can be difficult to differentiate, and reflective silverware causes noisy depth-camera perception. We show that adding static data about the objects themselves improves the performance of an end-to-end pipeline for classifying action outcomes. Images of the objects, and language expressions describing them, encode prior geometry, shape, and size information that refines classification accuracy. We collect over 13 hours of egocentric manipulation data for training a model to reason about whether a robot successfully placed unseen objects in or on one another. The model achieves up to a 57% absolute gain over the task baseline on pairs of previously unseen objects.
Year
2019
DOI
10.1109/IROS40897.2019.8968142
Venue
2019 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS)
Field
ENCODE, Computer vision, Expression (mathematics), Bin, Simulation, Absolute gain, RGB color model, Artificial intelligence, Engineering, Robot, Perception, Robotics
DocType
Journal
Volume
abs/1904.01650
ISSN
2153-0858
Citations
0
PageRank
0.34
References
0
Authors
4
Name                  Order  Citations  PageRank
Rosario Scalise       1      1          0.68
Jesse Thomason        2      139        14.60
Yonatan Bisk          3      196        17.54
Siddhartha Srinivasa  4      2675       167.63