Abstract
---

Emotive virtual humans (VHs) are important for affective interactions with embodied conversational agents [1]. However, creating emotive VHs requires significant resources and time; for example, the VHs in movies and video games require teams of animators and months of work. VHs can also be imbued with emotion using appraisal theory methods, which apply psychology-based models that evaluate external events against the VH's goals and beliefs to generate emotions. These external events require manual tagging or natural language understanding [2]. As an alternative approach, we propose tagging VH responses with emotions using textual affect sensing methods. The method developed by Neviarouskaya et al. [3] uses syntactic parses and a database of words and associated emotion intensities. We use this database, and because these emotions are associated with specific words, we can combine the emotions with audio timing information to generate lip-synched facial expressions. Our approach, AutoEmotion, allows us to automatically add basic emotions to VHs without the need for manual animation, tagging, or natural language understanding.
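The abstract's core idea — looking up per-word emotion intensities in an affect database and aligning them with word-level audio timings to drive facial expressions — can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation; the database entries, function names, and timing format are all assumptions made for the example.

```python
# Hypothetical sketch of the pipeline the abstract describes: map words to
# (emotion, intensity) via an affect database, then combine those tags with
# per-word audio timings to produce facial-expression keyframes.

# Toy stand-in for the word/emotion-intensity database; entries are invented.
AFFECT_DB = {
    "happy": ("joy", 0.8),
    "sorry": ("sadness", 0.6),
    "great": ("joy", 0.7),
}

def expression_keyframes(words, timings):
    """Combine emotion tags with per-word (start, end) audio timings.

    Returns keyframes as (start_time, emotion, intensity) tuples so the
    expression can be triggered in sync with the lip-synched audio.
    """
    frames = []
    for word, (start, _end) in zip(words, timings):
        if word in AFFECT_DB:
            emotion, intensity = AFFECT_DB[word]
            frames.append((start, emotion, intensity))
    return frames

words = ["i", "am", "happy", "to", "help"]
timings = [(0.0, 0.2), (0.2, 0.4), (0.4, 0.8), (0.8, 0.9), (0.9, 1.3)]
print(expression_keyframes(words, timings))  # [(0.4, 'joy', 0.8)]
```

Keying each expression to the word's audio start time is what lets the emotion display stay synchronized with the generated lip motion, as the abstract notes.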
Year | DOI | Venue
---|---|---
2009 | 10.1007/978-3-642-04380-2_57 | IVA

Keywords | Field | DocType
---|---|---
emotive virtual human, associated emotion intensity, external event, emotive virtual humans, basic emotion, vh response, automated generation, alternative approach, manual tagging, natural language understanding, manual animation, emotive vhs, emotions, facial expression, virtual human, virtual reality | Conversation, Computer science, Emotion classification, Embodied cognition, Facial expression, Natural language understanding, Natural language processing, Animation, Artificial intelligence, Emotive, Appraisal theory, Multimedia | Conference

Volume | ISSN | Citations
---|---|---
5773 | 0302-9743 | 1

PageRank | References | Authors
---|---|---
0.35 | 5 | 3
Name | Order | Citations | PageRank
---|---|---|---
Joon Hao Chuah | 1 | 24 | 3.34
Brent Rossen | 2 | 70 | 6.86
Benjamin Lok | 3 | 88 | 15.06