**Abstract**

It has been shown that humans are sensitive to the portrayal of emotions by virtual characters. However, previous work in this area has often examined this sensitivity using extreme examples of facial or body animation. Less is known about how attuned people are to emotions as they are expressed during conversational communication. To determine whether body or facial motion is a better indicator of emotional expression for game characters, we conduct a perceptual experiment using synchronized full-body and facial motion-capture data. We find that people can recognize emotions from either modality alone, but that combining facial and body motion is preferable for creating more expressive characters.
| Year | DOI | Venue |
|---|---|---|
| 2013 | 10.1145/2522628.2522633 | MIG |
| Keywords | Field | DocType |
|---|---|---|
| expressive character, emotionally expressive characters, body motion, emotion capture, emotional expression, attuned people, facial motion-capture data, conversational communication, body animation, facial motion, extreme example, better indicator, animation, perception, emotion | Computer vision, Computer science, Emotional expression, Human–computer interaction, Animation, Artificial intelligence, Computer facial animation, Multimedia, Perception | Conference |
| Citations | PageRank | References |
|---|---|---|
| 4 | 0.46 | 11 |
**Authors** (4)

| Name | Order | Citations | PageRank |
|---|---|---|---|
| Cathy Ennis | 1 | 127 | 8.74 |
| Ludovic Hoyet | 2 | 190 | 27.11 |
| Arjan Egges | 3 | 417 | 32.06 |
| Rachel McDonnell | 4 | 558 | 49.37 |