Abstract
---
A human hand can grasp a desired number of objects at once from a pile based solely on tactile sensing. To do the same, a robot needs to make a grasp in a pile, sense the number of objects in the grasp before lifting, and predict how many will remain in the grasp after lifting. This is a very challenging problem because, at prediction time, the robotic hand is still in the pile and the objects in the grasp are not observable to vision systems. Moreover, some objects in the hand before lifting may fall out of the grasp once lifting starts, because they were supported by other objects in the pile rather than by the fingers. The robotic hand must therefore sense, using only its tactile sensors, how many objects are in a grasp before lifting. This paper presents novel multi-object grasp analysis methods to solve this problem: a grasp volume calculation, a tactile force analysis, and a data-driven deep learning approach. The methods were implemented on a Barrett hand and evaluated both in simulation and on a real robotic system. The results show that once the Barrett hand grasps multiple objects in the pile, the data-driven models can accurately predict, before lifting, how many objects will remain in the hand after lifting. The root-mean-square errors are 0.74 for balls and 0.58 for cubes in simulation, and 1.06 for balls and 1.45 for cubes on the real system.
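The reported errors are root-mean-square errors between the predicted and actual number of objects remaining in the hand after lifting. A minimal sketch of that metric, using made-up per-grasp counts (the `pred` and `true` values below are illustrative, not from the paper):

```python
import math

def rmse(predicted, actual):
    """Root-mean-square error between predicted and actual object counts."""
    assert len(predicted) == len(actual) and len(predicted) > 0
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(predicted))

# Hypothetical grasps: predicted remaining objects vs. how many actually stayed.
pred = [3, 2, 4, 1, 3]
true = [3, 3, 4, 2, 3]
print(round(rmse(pred, true), 2))  # → 0.63
```

Because the targets are small integer counts, an RMSE near 1 (as in the real-system results) corresponds to being off by roughly one object per grasp on average.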
Year | DOI | Venue
---|---|---
2021 | 10.1109/IROS51168.2021.9636777 | 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)

DocType | ISSN | Citations
---|---|---
Conference | 2153-0858 | 0

PageRank | References | Authors
---|---|---
0.34 | 0 | 5
Name | Order | Citations | PageRank |
---|---|---|---|
Tianze Chen | 1 | 0 | 0.68 |
Adheesh Shenoy | 2 | 0 | 0.34 |
Anzhelika Kolinko | 3 | 0 | 0.34 |
Syed Shah | 4 | 0 | 0.34 |
Yu Sun | 5 | 208 | 35.82 |