Title
Methods and challenges for creating an emotional audio-visual database
Abstract
Emotion plays a very important role in human communication and can be expressed verbally through speech (e.g. pitch, intonation, prosody) or through facial expressions, gestures, etc. Most contemporary human-computer interaction systems are deficient in interpreting this information and hence suffer from a lack of emotional intelligence. In other words, these systems are unable to identify a human's emotional state and therefore cannot react appropriately. To overcome these limitations, machines must be trained on annotated emotional data samples. Motivated by this, we have attempted to collect and create an audio-visual emotional corpus. Audio-visual signals of multiple subjects were recorded while they watched either presentations (with background music) or emotional video clips. After the recording, subjects were asked to express how they felt and to read out sentences that appeared on the screen. The recorded data were annotated both by the subjects themselves (self-annotation) and by external annotators.
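The dual annotation scheme the abstract describes (a self-reported label from the subject plus labels from other annotators, over acted/spontaneous/induced elicitation styles named in the keywords) could be sketched as a data record like the following. This is a minimal illustration, not the paper's actual schema; all class, field, and function names are hypothetical:

```python
from dataclasses import dataclass, field
from enum import Enum

class Elicitation(Enum):
    # Elicitation styles named in the paper's keywords.
    ACTED = "acted"
    SPONTANEOUS = "spontaneous"
    INDUCED = "induced"

@dataclass
class Recording:
    # Hypothetical layout for one annotated audio-visual sample.
    subject_id: str
    stimulus: str              # e.g. "presentation" or "emotional video clip"
    elicitation: Elicitation
    self_label: str            # emotion reported by the subject (self-annotation)
    other_labels: list = field(default_factory=list)  # labels from external annotators

def majority_label(rec: Recording) -> str:
    """Combine self- and external annotations by a simple majority vote."""
    votes = [rec.self_label] + rec.other_labels
    return max(set(votes), key=votes.count)

rec = Recording("S01", "emotional video clip", Elicitation.INDUCED,
                self_label="sad", other_labels=["sad", "neutral", "sad"])
print(majority_label(rec))  # prints "sad"
```

Majority voting is only one way to reconcile the two annotation sources; the paper itself does not specify how (or whether) the self- and external labels were merged.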
Year
2017
DOI
10.1109/ICSDA.2017.8384466
Venue
2017 20th Conference of the Oriental Chapter of the International Coordinating Committee on Speech Databases and Speech I/O Systems and Assessment (O-COCOSDA)
Keywords
emotion database, speech emotion, visual expression, acted, spontaneous, induced
Field
Prosody, Annotation, Gesture, Psychology, Facial expression, Artificial intelligence, Natural language processing, Emotional intelligence, Human communication, CLIPS
DocType
Conference
ISBN
978-1-5386-3334-2
Citations
0
PageRank
0.34
References
4
Authors
3
Name                    Order  Citations  PageRank
Meghna Pandharipande    1      0          0.34
Rupayan Chakraborty     2      15         8.21
Sunil Kumar Kopparapu   3      42         25.18