Title |
---|
Understanding user commands by evaluating fuzzy linguistic information based on visual attention |
Abstract |
---|
This article proposes a method for understanding user commands based on visual attention. Voice commands commonly include fuzzy linguistic terms such as “very little,” so a robot’s capacity to understand such information is vital for effective human–robot interaction. However, the quantitative meaning of such terms depends strongly on the spatial arrangement of the surrounding environment. A visual attention system (VAS) is therefore introduced to evaluate fuzzy linguistic information under the prevailing environmental conditions. It is assumed that the distance corresponding to a particular fuzzy linguistic command depends on the spatial arrangement of the surrounding objects. Accordingly, a fuzzy-logic-based voice command evaluation system (VCES) is proposed to assess the uncertain information in user commands based on the average distance to the surrounding objects. An object-manipulation scenario in which the user’s working space is rearranged is simulated to illustrate the system, and the approach is demonstrated on a PA-10 robot manipulator. |
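The abstract's core idea — that the crisp distance implied by a fuzzy term like "very little" should scale with the average distance to surrounding objects — can be sketched as follows. This is an illustrative toy, not the authors' VCES: the function names, the triangular membership shape, and the 10% scaling factor are all assumptions introduced here for demonstration.

```python
# Illustrative sketch (not the authors' VCES): interpreting the fuzzy
# linguistic term "very little" relative to the average spacing of
# objects in the scene, as the abstract suggests.

def average_object_distance(positions):
    """Mean pairwise Euclidean distance between object positions (x, y) in cm."""
    n = len(positions)
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            dx = positions[i][0] - positions[j][0]
            dy = positions[i][1] - positions[j][1]
            total += (dx * dx + dy * dy) ** 0.5
            pairs += 1
    return total / pairs

def triangular(x, a, b, c):
    """Standard triangular fuzzy membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def very_little_distance(avg_dist):
    """
    Hypothetical defuzzification: model "very little" as a triangular
    set (0, peak, 2*peak) whose peak sits at 10% of the average object
    spacing (an assumed scaling factor); its centroid is simply `peak`.
    """
    return 0.1 * avg_dist

# Three objects on a 30 x 40 cm workspace.
objects = [(0.0, 0.0), (30.0, 0.0), (0.0, 40.0)]
avg = average_object_distance(objects)
print(f"average spacing: {avg:.1f} cm")
print(f"'very little' -> {very_little_distance(avg):.1f} cm")
```

The same command thus maps to a larger displacement in a sparsely arranged workspace than in a cluttered one, which is the context dependence the VCES is designed to capture.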
Year | DOI | Venue |
---|---|---|
2009 | 10.1007/s10015-009-0716-8 | Artificial Life and Robotics |
Keywords | DocType | Volume
---|---|---|
fuzzy linguistic information, visual attention, robot control, human robot interaction, information visualization, fuzzy logic | Journal | 14
Issue | ISSN | Citations
---|---|---|
1 | 1614-7456 | 0
PageRank | References | Authors
---|---|---|
0.34 | 3 | 3
Name | Order | Citations | PageRank |
---|---|---|---|
A. G. Jayasekara | 1 | 3 | 5.45 |
Keigo Watanabe | 2 | 591 | 144.65 |
Kiyotaka Izumi | 3 | 225 | 52.60 |