Abstract |
---|
A two-phase procedure based on biosignal recordings is applied in an attempt to classify the emotional valence content of human-agent interactions. In the first phase, participants are exposed to a sample of pictures with known valence values (taken from the IAPS dataset), and classifiers are trained on selected features of the recorded biosignals. In the second phase, biosignals are recorded while each participant watches video clips of interactions with a female and a male embodied conversational agent (ECA). The classifiers trained in the first phase are then applied, and the two interfaces are compared based on the classified emotional responses to the video clips. The results obtained are promising and are discussed in the paper, together with the problems encountered and suggestions for possible future improvements. |
Year | DOI | Venue |
---|---|---|
2008 | 10.1007/978-3-642-03320-9_7 | Cross-Modal Analysis of Speech, Gestures, Gaze and Facial Expressions |
Field | DocType | Volume |
---|---|---|
Communication, Computer science, Biosignal | Conference | 5641 |
ISSN | Citations | PageRank |
---|---|---|
0302-9743 | 2 | 0.41 |
References | Authors |
---|---|
7 | 3 |
Name | Order | Citations | PageRank |
---|---|---|---|
Evgenia Hristova | 1 | 6 | 3.05 |
Maurice Grinberg | 2 | 53 | 38.54 |
Emilian Lalev | 3 | 5 | 1.22 |