| Abstract |
|---|
| Supervised systems require human labels for training, but are humans themselves always impartial during the annotation process? We examine this question in the context of automated assessment of human behavioral tasks. Specifically, we investigate whether human ratings can be trusted at face value when scoring video-based structured interviews, and whether such ratings can affect machine learning models that use them as training data. We present preliminary empirical evidence of biases in such annotations, most of which are visual in nature. |
| Year | DOI | Venue |
|---|---|---|
| 2019 | 10.1145/3351529.3360653 | Adjunct of the 2019 International Conference on Multimodal Interaction |

| Keywords | DocType | ISBN |
|---|---|---|
| human bias, multimodal system, structured video interview | Conference | 978-1-4503-6937-4 |

| Citations | PageRank | References |
|---|---|---|
| 0 | 0.34 | 0 |
Authors (9)

| Name | Order | Citations | PageRank |
|---|---|---|---|
| Chee Wee Leong | 1 | 153 | 15.10 |
| Katrina Roohr | 2 | 0 | 0.34 |
| Vikram Ramanarayanan | 3 | 70 | 13.97 |
| Michelle Martin-Raugh | 4 | 3 | 1.09 |
| Harrison Kell | 5 | 3 | 0.75 |
| Rutuja Ubale | 6 | 2 | 3.17 |
| Qian Yao | 7 | 527 | 51.55 |
| Zydrune Mladineo | 8 | 0 | 0.34 |
| Laura McCulla | 9 | 0 | 0.34 |