Title
Linear and Non-Linear Multimodal Fusion for Continuous Affect Estimation In-the-Wild
Abstract
Automatic continuous affect recognition from multiple modalities in the wild is arguably one of the most challenging research areas in affective computing. In addressing this regression problem, the advantages of each modality, such as audio, video and text, have been frequently explored, but mostly in isolation. Little attention has been paid so far to quantifying the relationships among these modalities. Motivated to leverage the individual advantages of each modality, this study investigates behavioral modeling for continuous affect estimation using multimodal fusion approaches: Linear Regression, Exponent Weighted Decision Fusion and Multi-Gene Genetic Programming. The capabilities of each fusion approach are illustrated by applying it to affect estimates generated from multiple modalities using classical Support Vector Regression. The proposed fusion methods were applied to the public Sentiment Analysis in the Wild (SEWA) multimodal dataset, and the experimental results indicate that proper fusion can deliver a significant performance improvement for all affect estimation tasks. The results further show that the proposed systems are competitive with, or outperform, other state-of-the-art approaches.
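The late-fusion formulation the abstract describes — a per-modality Support Vector Regressor whose outputs are combined by a linear-regression fuser — can be sketched as follows. This is a minimal illustration with synthetic stand-in data, not the authors' SEWA feature pipeline; the feature dimensions, modality names and target are assumptions for demonstration only.

```python
# Hypothetical sketch: one SVR per modality, then linear-regression fusion
# of the modality-level predictions (a simple stacking-style late fusion).
import numpy as np
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200
# Synthetic per-modality features (audio, video, text) and a continuous
# affect target (e.g. valence in [-1, 1]); dimensions are illustrative.
modalities = {
    "audio": rng.normal(size=(n, 8)),
    "video": rng.normal(size=(n, 12)),
    "text": rng.normal(size=(n, 5)),
}
target = np.tanh(modalities["audio"][:, 0] + modalities["video"][:, 0])

# Stage 1: train an independent SVR on each modality.
per_modality_preds = []
for name, X in modalities.items():
    svr = SVR(kernel="rbf").fit(X, target)
    per_modality_preds.append(svr.predict(X))
stacked = np.column_stack(per_modality_preds)  # shape (n, n_modalities)

# Stage 2: fuse the modality-level estimates with linear regression.
fuser = LinearRegression().fit(stacked, target)
fused = fuser.predict(stacked)
```

In practice the fuser would be fit on held-out development predictions rather than training-set outputs, and the paper's other two fusers (exponent-weighted decision fusion, multi-gene genetic programming) would replace the `LinearRegression` stage.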
Year: 2018
DOI: 10.1109/FG.2018.00079
Venue: 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018)
Keywords: linear, affect, non-linear, fusion, linear regression
Field: Modalities, Computer science, Sentiment analysis, Behavioral modeling, Support vector machine, Genetic programming, Artificial intelligence, Affective computing, Machine learning, Linear regression, Performance improvement
DocType: Conference
ISSN: 2326-5396
ISBN: 978-1-5386-2336-7
Citations: 0
PageRank: 0.34
References: 0
Authors: 2
Name                    Order  Citations  PageRank
Yona Falinie A. Gaus    1      40         3.87
Hongying Meng           2      832        69.39