Abstract |
---|
This paper proposes a computational emotion formation model and attempts to apply it to speech communication. Since speech communication plays an essential part in our daily life, the use of speech sounds is expected to make human-robot communication smooth. The proposed model forms the state of emotion from prosodic components, because the emotional aspects of human speech are found in prosodic change. The model is based on an agent network architecture, and it also takes into account motor command generation according to the state of emotion. The validity of the model is examined by computer simulation. The simulation results show that the emotion formed by the model is adequate to the speech sounds, and that the motor command generation is also adequate. |
Year | DOI | Venue |
---|---|---|
2002 | 10.1109/ROBOT.2002.1014401 | ICRA |
Keywords | Field | DocType
---|---|---|
multi-agent systems, robots, speech-based user interfaces, agent network architecture, computational emotion model, human-robot communication, prosodic component, speech sounds | Speech sounds, Speech communication, Computational intelligence, Computer science, Network architecture, Speech recognition, Multi-agent system, Natural language processing, Artificial intelligence, Robot, Neurocomputational speech processing, Acoustical engineering | Conference
Volume | Issue | Citations
---|---|---|
4 | 1 | 0
PageRank | References | Authors
---|---|---|
0.34 | 3 | 3
Name | Order | Citations | PageRank |
---|---|---|---|
Yoshihisa Wakamatsu | 1 | 0 | 0.34 |
Toshiyuki Kondo | 2 | 131 | 28.57 |
Koji Ito | 3 | 15 | 4.52 |