Abstract
---
The capability of perceiving and expressing emotions through different modalities is key to enhancing human-computer interaction. In this paper we present a novel architecture for the development of intelligent multimodal affective interfaces. It is based on the integration of Sentic Computing, a new opinion mining and sentiment analysis paradigm based on AI and Semantic Web techniques, with a facial emotion classifier and Maxine, a powerful multimodal animation engine for managing virtual agents and 3D scenarios. One of the main distinguishing features of the system is that it does not simply perform emotional classification in terms of a set of discrete emotional labels, but operates in a continuous 2D emotional space, enabling the integration of the different affective extraction modules in a simple and scalable way.
Year | Venue | Keywords |
---|---|---|
2010 | COST 2102 Training School | sentic avatar, multimodal affective conversational agent, emotional space, different modality, semantic web technique, common sense, different affective extraction module, sentic computing, facial emotional classifier, discrete emotional label, powerful multimodal animation engine, emotional classification, intelligent multimodal affective interface, NLP, sentiment analysis, social environment, opinion mining, semantic web, AI, human-computer interaction, biometric identification, conversational agent
Field | DocType | Volume
---|---|---
Modalities, Sentiment analysis, Semantic Web, Psychology, Human–computer interaction, Artificial intelligence, Animation, Dialog system, Classifier (linguistics), Avatar, Scalability | Conference | 6456
ISSN | Citations | PageRank
---|---|---
0302-9743 | 4 | 0.46
References | Authors
---|---
25 | 5
Name | Order | Citations | PageRank |
---|---|---|---
Erik Cambria | 1 | 3873 | 183.70 |
Isabelle Hupont | 2 | 76 | 12.28 |
Amir Hussain | 3 | 705 | 29.16 |
Eva Cerezo | 4 | 328 | 43.40 |
Sandra Baldassarri | 5 | 188 | 34.08 |