Abstract |
---|
We develop and evaluate models for automatic vision-based voice activity detection (VAD) in multiparty human-human interactions, aimed at complementing acoustic VAD methods. We provide evidence that vision-based VAD models of this type are susceptible to spatial bias in the dataset used for their development: the physical setting of the interaction, usually constant throughout data acquisition, determines the distribution of the participants' head poses. Our results show that when the head pose distributions differ significantly between the train and test sets, the performance of the vision-based VAD models drops substantially. This suggests that previously reported results on datasets with a fixed physical configuration may overestimate the generalization capabilities of such models. We also propose a number of possible remedies for the spatial bias, including data augmentation, input masking and dynamic features, and provide an in-depth analysis of the visual cues used by the developed vision-based VAD models. |
Year | DOI | Venue |
---|---|---
2020 | 10.1109/ICPR48806.2021.9413345 | 2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR) |
Keywords | DocType | ISSN
---|---|---
neural networks, vision, voice activity detection, dataset bias, spatial bias | Conference | 1051-4651

Citations | PageRank | References
---|---|---
0 | 0.34 | 0
Authors |
---|
3 |
Name | Order | Citations | PageRank |
---|---|---|---
Kalin Stefanov | 1 | 0 | 0.34 |
Mohammad Adiban | 2 | 2 | 1.91 |
Giampiero Salvi | 3 | 148 | 21.76 |