Title
Are Humans Biased in Assessment of Video Interviews?
Abstract
Supervised systems require human labels for training. But are humans themselves always impartial during the annotation process? We examine this question in the context of automated assessment of human behavioral tasks. Specifically, we investigate whether human ratings themselves can be trusted at face value when scoring video-based structured interviews, and whether such ratings can impact machine learning models that use them as training data. We present preliminary empirical evidence indicating that there are biases in such annotations, most of which are visual in nature.
Year
2019
DOI
10.1145/3351529.3360653
Venue
Adjunct of the 2019 International Conference on Multimodal Interaction
Keywords
human bias, multimodal system, structured video interview
DocType
Conference
ISBN
978-1-4503-6937-4
Citations
0
PageRank
0.34
References
0
Authors
9
Name                    Order   Citations   PageRank
Chee Wee Leong          1       153         15.10
Katrina Roohr           2       0           0.34
Vikram Ramanarayanan    3       70          13.97
Michelle Martin-Raugh   4       3           1.09
Harrison Kell           5       3           0.75
Rutuja Ubale            6       2           3.17
Qian Yao                7       527         51.55
Zydrune Mladineo        8       0           0.34
Laura McCulla           9       0           0.34