| Abstract |
|---|
| Deception (the act of deliberately causing someone to believe something that is not true) can have many repercussions in daily life. However, deception detection is an inherently complex task for humans. As a result, not only is there uncertainty about which features should serve as cues for automatic deception detection, but labeled data is also scarce. In this paper, we explore typical features that can be extracted from videos for affective computing and study their performance for deception detection in videos. Additionally, we study different multimodal fusion methods intended to improve on the results obtained by using the different sets of extracted features separately, including a novel set of methods based on boosting. For this study, high-level features are extracted with open automatic tools for the visual, acoustic, and textual modalities, respectively. Experiments are conducted on a real-life trial dataset for deception detection, as well as on a novel Mexican deception detection dataset with Spanish as the spoken language. |
| Year | DOI | Venue |
|---|---|---|
| 2019 | 10.1109/CVPRW.2019.00198 | IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops |
| Field | DocType | ISSN |
|---|---|---|
| Computer vision, Deception, Computer science, Human–computer interaction, Artificial intelligence | Conference | 2160-7508 |
| Citations | PageRank | References |
|---|---|---|
| 0 | 0.34 | 0 |
| Authors |
|---|
| 4 |
| Name | Order | Citations | PageRank |
|---|---|---|---|
| Rodrigo Rill-García | 1 | 0 | 0.68 |
| Hugo Jair Escalante | 2 | 939 | 73.89 |
| Luis Villaseñor-Pineda | 3 | 403 | 53.74 |
| Verónica Reyes-Meza | 4 | 0 | 2.37 |