| Abstract |
|---|
| Human workers are the most flexible, but also an expensive, resource in a production system. In the context of remanufacturing, robots are a cost-effective alternative, but programming them is often time-consuming and unprofitable. Programming by demonstration promises a flexible and intuitive alternative that would be feasible even for non-experts, but it first requires capturing and interpreting human actions. This work presents a multi-sensory, robot-supported platform that captures bimanual manipulation actions as well as human poses, hand movements, and gaze during manual disassembly tasks. As part of a study, subjects were recorded on this platform while disassembling electric motors in order to obtain suitable datasets for the recognition and classification of human actions. |

Year | DOI | Venue |
---|---|---|
2022 | 10.1515/auto-2022-0006 | AT-AUTOMATISIERUNGSTECHNIK |

| Keywords | DocType | Volume |
|---|---|---|
| multi-sensory capturing of human actions, programming by demonstration, gaze estimation, gaze analysis, semantic video representations | Journal | 70 |

| Issue | ISSN | Citations |
|---|---|---|
| 6 | 0178-2312 | 0 |

| PageRank | References | Authors |
|---|---|---|
| 0.34 | 0 | 9 |

| Name | Order | Citations | PageRank |
|---|---|---|---|
| Christian R. G. Dreher | 1 | 0 | 0.34 |
| Manuel Zaremski | 2 | 0 | 0.34 |
| Fabian Leven | 3 | 0 | 0.34 |
| David Schneider | 4 | 0 | 0.34 |
| Alina Roitberg | 5 | 6 | 6.20 |
| Rainer Stiefelhagen | 6 | 3512 | 274.86 |
| Michael Heizmann | 7 | 0 | 0.34 |
| Barbara Deml | 8 | 0 | 0.34 |
| Tamim Asfour | 9 | 1889 | 151.86 |