Title
MultiModal Deception Detection: Accuracy, Applicability and Generalizability
Abstract
The increasing use of Artificial Intelligence (AI) systems for face recognition and video processing raises the stakes of their application in daily life. Critical decisions are increasingly made with these AI systems in domains such as employment, finance, and crime prevention. These applications rely on abstract concepts such as emotions, trait evaluations (e.g., trustworthiness), and behavior (e.g., deception), which the AI system learns to infer from verbal and non-verbal cues in the human subject stimuli (e.g., facial expressions, movements, audio, text). Because such systems are often deployed in high-stakes scenarios, it is of utmost importance that an AI system participating in the decision-making process be highly reliable and credible. In this paper, we consider the feasibility of using such an AI system for deception detection. We examine whether deception can be detected from multimodal cues such as facial expressions and movements, audio, and video. We experiment with three different datasets involving varying degrees of deception. We also study state-of-the-art deception detection systems and investigate whether their algorithms extend to new datasets. We conclude that there is a lack of reasonable evidence that AI-based deception detection generalizes across different scenarios of lying (lying deliberately, lying under duress, and lying through half-truths), and that additional factors will need to be considered before such a claim can be made.
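The abstract describes classifying truthful versus deceptive behavior from fused verbal and non-verbal cues. The sketch below is only an illustration of that general idea, not the authors' system: the feature dimensions, dataset size, placeholder random features, and the choice of an SVM with early (concatenation) fusion are all assumptions made for the example.

```python
# Illustrative sketch (assumptions, not the paper's pipeline): early fusion of
# per-modality feature vectors followed by a binary truthful/deceptive classifier.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_clips = 120  # hypothetical number of interview clips

# Placeholder per-modality features; in a real system these would come from
# facial-expression, audio, and transcript encoders.
visual = rng.normal(size=(n_clips, 64))    # e.g., facial action unit statistics
audio = rng.normal(size=(n_clips, 32))     # e.g., prosodic features
text = rng.normal(size=(n_clips, 48))      # e.g., transcript embeddings
labels = rng.integers(0, 2, size=n_clips)  # 1 = deceptive, 0 = truthful

# Early fusion: concatenate the modalities into one feature vector per clip.
features = np.concatenate([visual, audio, text], axis=1)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, features, labels, cv=5)
print("cross-validated accuracy:", scores.mean())
```

Cross-validated accuracy on a single dataset, as computed here, says nothing about whether the classifier transfers to a different lying scenario, which is the generalizability question the paper raises.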
Year
2020
DOI
10.1109/TPS-ISA50397.2020.00023
Venue
2020 Second IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA)
Keywords
deception detection, multi-modal data analysis, machine learning, ethics, facial expressions
DocType
Conference
ISBN
978-1-7281-8544-6
Citations
0
PageRank
0.34
References
0
Authors
8