| Abstract |
| --- |
| Social interaction is essential to improving the human-robot interface. Behaviors for social interaction may include paying attention to a new sound source, moving toward it, or maintaining face-to-face contact with a moving speaker. Some sound-centered behaviors may be difficult to attain, because mixtures of sounds are not handled well or because auditory processing is too slow for real-time applications. Recently, Nakadai et al. have developed real-time auditory and visual multiple-talker tracking technology that associates auditory and visual streams. The system is implemented on an upper-torso humanoid, and real-time talker tracking is attained with a delay of 200 msec by distributed processing on four PCs connected by Gigabit Ethernet. Focus-of-attention is programmable and allows a variety of behaviors. The system demonstrates non-verbal social interaction by realizing a receptionist robot that focuses on an associated stream, while a companion robot focuses on an auditory stream. |
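The abstract's core mechanism, associating auditory and visual streams and applying a programmable focus-of-attention policy, could be sketched as follows. This is a minimal illustration only: the stream types, the azimuth-matching rule, the tolerance value, and all function names are assumptions, not details from the paper.

```python
# Hypothetical sketch of audio-visual stream association and a
# programmable focus-of-attention policy, loosely following the
# abstract. All names and thresholds are illustrative.

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Stream:
    kind: str          # "auditory", "visual", or "associated"
    direction: float   # azimuth in degrees


def associate(auditory: List[Stream], visual: List[Stream],
              tolerance: float = 10.0) -> List[Stream]:
    """Pair auditory and visual streams whose azimuths agree to
    within `tolerance` degrees; unmatched streams pass through."""
    result = []
    unmatched_visual = list(visual)
    for a in auditory:
        match = next((v for v in unmatched_visual
                      if abs(v.direction - a.direction) <= tolerance), None)
        if match is not None:
            unmatched_visual.remove(match)
            # Fuse the pair into a single associated stream.
            result.append(Stream("associated",
                                 (a.direction + match.direction) / 2))
        else:
            result.append(a)
    return result + unmatched_visual


def focus_of_attention(streams: List[Stream],
                       prefer: str) -> Optional[Stream]:
    """Programmable policy: e.g. a receptionist robot prefers
    'associated' streams, a companion robot prefers 'auditory' ones."""
    preferred = [s for s in streams if s.kind == prefer]
    pool = preferred or streams
    return pool[0] if pool else None


if __name__ == "__main__":
    auditory = [Stream("auditory", 30.0), Stream("auditory", -60.0)]
    visual = [Stream("visual", 28.0)]
    streams = associate(auditory, visual)
    print(focus_of_attention(streams, prefer="associated").kind)
    print(focus_of_attention(streams, prefer="auditory").kind)
```

Swapping the `prefer` argument is what makes focus-of-attention "programmable" in this sketch: the same tracked streams drive either receptionist-like or companion-like behavior.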
| Year | Venue | Keywords |
| --- | --- | --- |
| 2002 | IEA/AIE | companion robot, non-verbal social interaction, real-time talker tracking, auditory processing, real-time auditory, audio-visual tracking, social interaction, robot human interface, real-time application, receptionist robot, auditory stream, human interface, real time, distributed processing |
| Field | DocType | ISBN |
| --- | --- | --- |
| Social relation, Computer vision, Computer science, Face-to-face, Eye tracking, Artificial intelligence, Gigabit Ethernet, Robot, Human interface device | Conference | 3-540-43781-9 |
| Citations | PageRank | References |
| --- | --- | --- |
| 7 | 0.91 | 10 |
| Authors |
| --- |
| 3 |
| Name | Order | Citations | PageRank |
| --- | --- | --- | --- |
| Hiroshi G. Okuno | 1 | 2092 | 233.19 |
| Kazuhiro Nakadai | 2 | 1342 | 155.91 |
| Hiroaki Kitano | 3 | 3515 | 539.37 |