Abstract
---
This paper demonstrates multimodal fusion of emotion sensory data in realistic scenarios of relatively long human-machine interactions. The fusion, which combines voice and facial expressions, has been enhanced with semantic information retrieved from Internet social networks, resulting in more accurate determination of the conveyed emotion.
Field | Value
---|---
Year | 2011
DOI | 10.1007/978-3-642-25330-0_30
Venue | MICAI (2)
Keywords | long human-machine interaction, multimodal emotion detection, accurate determination, multimodal fusion, facial expression, conveyed emotion, realistic scenario, semantic information, emotion sensory data, internet social network
DocType | Conference
Volume | 7095
ISSN | 0302-9743
Citations | 3
PageRank | 0.43
References | 7
Authors | 4
Name | Order | Citations | PageRank
---|---|---|---
Diego R. Cueva | 1 | 3 | 1.10 |
Rafael A. M. Gonçalves | 2 | 3 | 1.10 |
Fabio G. Cozman | 3 | 1200 | 172.21 |
Marcos R. Pereira-Barretto | 4 | 4 | 1.12 |