Abstract |
---|
In this paper, we propose a collaborative SLAM system involving a team of three heterogeneous agents: a robot, a human operator, and an augmented-reality head-mounted display (AR-HMD). The system allows online editing of a map produced by a robot running SLAM. Through hand gestures, the user can edit, in real time, the robot's map, which is augmented on top of the physical environment. Moreover, the proposed system leverages the built-in SLAM capabilities of the AR-HMD to correct the robot's map and to map areas not yet discovered by the robot. Our method aims to combine the unique and complementary capabilities of the three agents to achieve the highest possible mapping accuracy in the least amount of time. The proposed system is implemented on ROS and Unity. Experiments demonstrate considerably superior SLAM output in terms of reduced mapping time, eliminated map post-processing, and increased mapping accuracy. |
Year | DOI | Venue
---|---|---
2019 | 10.1109/IROS40897.2019.8968027 | 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Field | DocType | ISSN
---|---|---
Computer vision, Human operator, Computer science, Gesture, Augmented reality, Optical head-mounted display, Artificial intelligence, Robot | Conference | 2153-0858

Citations | PageRank | References
---|---|---
0 | 0.34 | 0
Authors |
---|
3 |
Name | Order | Citations | PageRank
---|---|---|---
Abbas Sidaoui | 1 | 1 | 1.73 |
Imad H. Elhajj | 2 | 310 | 53.65 |
Daniel C. Asmar | 3 | 82 | 20.11 |