Title
Multimodal Assessment of Oral Presentations using HMMs
Abstract
Audience perceptions of a public speaker's performance change over time. Some speakers start strong but quickly lapse into mundane delivery, while others have a few impactful, engaging portions of their talk preceded and followed by more pedestrian delivery. In this work, we model the time-varying qualities of a presentation as perceived by the audience, and we use these models both to provide diagnostic feedback to presenters and to improve the quality of automated performance assessments. In particular, we use Hidden Markov Models (HMMs) to capture how various dimensions of perceived quality change over time, and we use the resulting sequence of quality states to improve feedback and predictions. We evaluate this approach on a corpus of 74 presentations given in a controlled environment. Multimodal features, spanning acoustic qualities, speech disfluencies, and nonverbal behavior, were derived both automatically and manually via crowdsourcing. Ground truth on audience perceptions was obtained from judge ratings of both overall presentations (aggregate) and portions of presentations segmented by topic. We distilled overall presentation quality into states representing the presenter's gaze, audio, gesture, audience-interaction, and proxemic behaviors. We demonstrate that this HMM-based state representation of presentations improves automated performance assessments.
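The pipeline the abstract describes can be sketched in code. The following is a minimal, hypothetical illustration, not the authors' implementation: it assumes the hmmlearn and scikit-learn libraries and uses invented stand-in data in place of the paper's corpus and features. It fits a Gaussian HMM over per-segment multimodal feature vectors, decodes each presentation's hidden quality-state sequence, and feeds state-occupancy features into a simple regressor for the judge ratings.

# Hypothetical sketch of the approach: fit an HMM over per-segment
# multimodal features, decode a quality-state sequence, and use state
# occupancy as extra features when predicting judge ratings.
# All data, dimensions, and hyperparameters below are illustrative.
import numpy as np
from hmmlearn.hmm import GaussianHMM
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Stand-in data: 74 presentations, each split into topic segments,
# each segment described by a multimodal feature vector (acoustic,
# disfluency, and nonverbal measures in the paper).
n_presentations, n_features = 74, 12
segments = [rng.normal(size=(rng.integers(4, 9), n_features))
            for _ in range(n_presentations)]
judge_ratings = rng.uniform(1, 7, size=n_presentations)  # stand-in labels

# Fit one HMM whose hidden states act as discrete "quality states".
X = np.vstack(segments)
lengths = [s.shape[0] for s in segments]
hmm = GaussianHMM(n_components=5, covariance_type="diag",
                  n_iter=100, random_state=0)
hmm.fit(X, lengths)

# Decode each presentation's state sequence and summarize it as the
# fraction of segments spent in each state (state-occupancy histogram).
def state_occupancy(seg_feats):
    states = hmm.predict(seg_feats)
    return np.bincount(states, minlength=hmm.n_components) / len(states)

state_feats = np.array([state_occupancy(s) for s in segments])

# Combine the state-based features with mean segment features to
# predict each presentation's overall judge rating.
mean_feats = np.array([s.mean(axis=0) for s in segments])
features = np.hstack([mean_feats, state_feats])
model = Ridge(alpha=1.0).fit(features, judge_ratings)
print("train R^2:", model.score(features, judge_ratings))

With real data, the comparison of interest would be the regressor with and without the state-occupancy columns, mirroring the paper's claim that the state-based representation improves assessments.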
Year
2020
DOI
10.1145/3382507.3418888
Venue
ICMI '20: International Conference on Multimodal Interaction, Virtual Event, Netherlands, October 2020
DocType
Conference
ISBN
978-1-4503-7581-8
Citations
0
PageRank
0.34
References
0
Authors
6
Name                 Order  Citations  PageRank
Everlyne Kimani      1      3          4.46
Prasanth Murali      2      6          4.47
Ameneh Shamekhi      3      1          2.04
Jeremy N. Bailenson  4      119        13.36
Sumanth Munikoti     5      0          0.34
Timothy Bickmore     6      2581       318.35