Title
Rushing To Judgement: How Do Laypeople Rate Caller Engagement In Thin-Slice Videos Of Human-Machine Dialog?
Abstract
We analyze the efficacy of a small crowd of naive human raters in rating engagement during human-machine dialog interactions. Each rater viewed multiple 10-second, thin-slice videos of non-native English speakers interacting with a computer-assisted language learning (CALL) system and rated how engaged and disengaged those callers were while interacting with the automated agent. We observe how the crowd's ratings compared to callers' self-ratings of engagement, and further study how the distribution of these rating assignments varies as a function of whether the automated system or the caller was speaking. Finally, we discuss the potential applications and pitfalls of such a crowdsourced paradigm in designing, developing, and analyzing engagement-aware dialog systems.
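As a rough illustration of the kind of comparison the abstract describes (crowd ratings versus callers' self-ratings per thin-slice clip), the following is a minimal, hypothetical Python sketch. It is not the authors' code or data; the clip identifiers, rating scale, median aggregation, and use of a Spearman rank correlation are all assumptions made purely for illustration.

# Hypothetical sketch (not from the paper): aggregate per-clip crowd ratings
# and compare them against callers' self-ratings with a rank correlation.
from statistics import median
from scipy.stats import spearmanr

# Toy data: each 10-second clip receives several crowd ratings (assumed 1-5
# Likert scale) plus one self-rating from the caller. Values are illustrative.
crowd_ratings = {
    "clip_01": [4, 5, 4],
    "clip_02": [2, 1, 2],
    "clip_03": [3, 4, 3],
    "clip_04": [5, 4, 5],
}
self_ratings = {"clip_01": 5, "clip_02": 2, "clip_03": 4, "clip_04": 4}

# Median-aggregate the crowd per clip, align with self-ratings, and correlate.
clips = sorted(crowd_ratings)
crowd = [median(crowd_ratings[c]) for c in clips]
selfr = [self_ratings[c] for c in clips]
rho, p_value = spearmanr(crowd, selfr)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")

A per-clip rank correlation of this sort is only one possible agreement measure; chance-corrected statistics such as Krippendorff's alpha are another common choice for multi-rater engagement annotations.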
Year
2017
DOI
10.21437/Interspeech.2017-1205
Venue
18TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2017), VOLS 1-6: SITUATED INTERACTION
Keywords
engagement, human-computer interaction, dialog systems, computer-assisted language learning, crowdsourcing
Field
Dialog box, Human–machine system, Computer science, Judgement, Speech recognition
DocType
Conference
ISSN
2308-457X
Citations
1
PageRank
0.36
References
7
Authors
3
Name                      Order  Citations  PageRank
Vikram Ramanarayanan      1      70         13.97
Chee Wee Leong            2      153        15.10
David Suendermann-Oeft    3      3          2.17