Abstract |
---|
When a mobile robot provides various services in everyday spaces, it is important for the robot to speak to people naturally. However, non-humanoid robots cannot use nonverbal cues such as gestures and body orientation the way humans do. In environments with multiple people in particular, it is hard to tell who is speaking to whom, which can cause the addressee to be ignored or leave bystanders feeling confused or awkward. This paper proposes a method in which a robot without a human-like body uses projection to present the area where it interacts, thereby clarifying who is speaking to whom. In the experiment, the robot addressed and guided certain people in a multi-person environment, and we verified whether the information was accurately conveyed to the intended addressees, compared with the conventional method in which the robot simply turns toward them. All participants were correctly guided by the proposed method, whereas some participants were incorrectly guided by the orientation-based method. A questionnaire evaluation also indicated that the proposed method tends to be more comfortable. |
Year | DOI | Venue |
---|---|---|
2022 | 10.23919/SICE56594.2022.9905817 | 2022 61st Annual Conference of the Society of Instrument and Control Engineers (SICE) |
Keywords | DocType | ISBN |
---|---|---|
Augmented reality, Guide robot, Human-robot interaction, Interaction area, Projection robot | Conference | 978-1-6654-9224-9 |
Citations | PageRank | References |
---|---|---|
0 | 0.34 | 9 |
Authors |
---|
3 |
Name | Order | Citations | PageRank |
---|---|---|---|
Suguru Sone | 1 | 0 | 0.34 |
Tetsushi Ikeda | 2 | 0 | 1.69 |
Satoshi Iwaki | 3 | 0 | 0.68 |