Abstract | ||
---|---|---|
This article presents an integrated framework for multi-modal, adaptive cognitive technical systems that guide, assist, and observe human workers in complex manual assembly environments. The demand for highly flexible assembly facilities conflicts with the long training and preparation phases that human workers require. By delivering context-aware assembly instructions via retina displays, text-to-speech commands, or acoustic signals, a non-specialized stand-in worker on a production task can be precisely directed to execute the next processing step without any prior knowledge. Using non-invasive gesture recognizers and object detectors, the human worker can be observed in order to track progress on the production line and initiate the subsequent step in the interaction loop. To test and evaluate the desired human-machine interfaces and their capabilities, a virtual workplace together with a concrete use case is introduced. |
Year | DOI | Venue |
---|---|---|
2007 | 10.1109/ICME.2007.4285133 | 2007 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, VOLS 1-5 |
Keywords | Field | DocType
---|---|---|
text to speech,production,use case,displays,gesture recognition,human machine interface,adaptive systems,human computer interaction | Computer vision,Human–machine system,Computer science,Adaptive system,Assembly systems,Gesture,Gesture recognition,Artificial intelligence,Production line,Cognition,Technical systems | Conference
Citations | PageRank | References
---|---|---|
3 | 0.51 | 3
Authors | ||
---|---|---|
7 |
Name | Order | Citations | PageRank |
---|---|---|---|
Frank Wallhoff | 1 | 214 | 28.41 |
Markus Ablaßmeier | 2 | 80 | 8.03 |
Alexander Bannat | 3 | 32 | 4.99 |
Stephan Buchta | 4 | 3 | 0.51 |
A. Rauschert | 5 | 3 | 0.51 |
Gerhard Rigoll | 6 | 2788 | 268.87 |
Mathey Wiesbeck | 7 | 21 | 3.05 |