Title
What is wrong with you?: Leveraging User Sentiment for Automatic Dialog Evaluation
Abstract
Accurate automatic evaluation metrics for open-domain dialogs are in high demand. Existing model-based metrics for system response evaluation are trained on human-annotated data, which is cumbersome to collect. In this work, we propose to use information that can be automatically extracted from the next user utterance, such as its sentiment or whether the user explicitly ends the conversation, as a proxy to measure the quality of the previous system response. This allows us to train on a massive set of dialogs with weak supervision, without requiring manual annotations of system turn quality. Experiments show that our model is comparable to models trained on human-annotated data. Furthermore, our model generalizes across both spoken and written open-domain dialog corpora collected from real and paid users.
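The sketch below illustrates the weak-labeling idea described in the abstract: each system turn is scored by the sentiment of the next user utterance. This is a minimal illustration only, not the authors' implementation; VADER is used as a stand-in sentiment model, and the actual model, signals, and thresholds in the paper may differ.

```python
# Minimal sketch (assumption: not the authors' code) of weak supervision from
# next-user-utterance sentiment, using VADER as a stand-in sentiment classifier.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # lexicon required by VADER


def weak_labels(dialog):
    """dialog: list of (speaker, utterance) pairs, speakers in {'system', 'user'}."""
    sia = SentimentIntensityAnalyzer()
    labels = []
    for i in range(len(dialog) - 1):
        speaker, _ = dialog[i]
        next_speaker, next_utt = dialog[i + 1]
        if speaker == "system" and next_speaker == "user":
            # Compound sentiment of the *next* user utterance, in [-1, 1].
            score = sia.polarity_scores(next_utt)["compound"]
            # A negative user reaction is a weak signal that the system turn was poor.
            labels.append((i, "good" if score >= 0 else "bad"))
    return labels


# Example: a clearly negative follow-up should yield a "bad" label for the system turn.
dialog = [
    ("system", "I think penguins can fly if they try hard enough."),
    ("user", "What is wrong with you? That makes no sense."),
]
print(weak_labels(dialog))  # expected: [(0, 'bad')], since the user reply scores negative
```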
Year
2022
DOI
10.18653/v1/2022.findings-acl.331
Venue
Findings of the Association for Computational Linguistics (ACL 2022)
DocType
Conference
Volume
Findings of the Association for Computational Linguistics: ACL 2022
Citations
0
PageRank
0.34
References
0
Authors
5
Name                    Order  Citations  PageRank
Sarik Ghazarian         1      0          2.03
Behnam Hedayatnia       2      0          0.34
Alexandros Papangelis   3      93         18.01
Yang Liu                4      945        70.67
Dilek Hakkani-Tür       5      1024       85.05