Title
Data Fusion For Real-Time Multimodal Emotion Recognition Through Webcams And Microphones In E-Learning
Abstract
This article describes the validation study of our software, which uses combined webcam and microphone data for real-time, continuous, unobtrusive emotion recognition as part of our FILTWAM framework. FILTWAM aims to deploy a real-time multimodal emotion recognition method that provides more adequate feedback to learners during online communication skills training. Such training requires timely feedback that reflects the intended emotions learners show and that increases their awareness of their own behavior. At a minimum, a reliable and valid software interpretation of the performed facial and vocal emotions is needed to warrant such adequate feedback. This validation study therefore calibrates our software. The study uses a multimodal fusion method. Twelve test persons performed computer-based tasks in which they were asked to mimic specific facial and vocal emotions. All test persons' behavior was recorded on video, and two raters independently scored the displayed emotions, which were contrasted with the software's recognition outcomes. A hybrid method for multimodal fusion in our software achieves an accuracy between 96.1% and 98.6% for the best-chosen WEKA classifiers over the predicted emotions. The software fulfils its requirements of real-time data interpretation and reliable results.
Year: 2016
DOI: 10.1080/10447318.2016.1159799
Venue: INTERNATIONAL JOURNAL OF HUMAN-COMPUTER INTERACTION
Field: E-learning, Computer science, Emotion recognition, Speech recognition, Sensor fusion, Human–computer interaction, Emotion detection, Software development, Microphone
DocType: Journal
Volume: 32
Issue: 5
ISSN: 1044-7318
Citations: 5
PageRank: 0.42
References: 23
Authors: 3
Name              Order  Citations  PageRank
Kiavash Bahreini  1      53         7.74
Rob Nadolski      2      245        22.09
Wim Westera       3      174        19.80