Title
Automatic Evaluation: Using a DATE Dialogue Act Tagger for User Satisfaction and Task Completion Prediction
Abstract
The objective of the DARPA Communicator project is to support rapid, cost-effective development of multi-modal speech-enabled dialogue systems with advanced conversational capabilities. During the course of the Communicator program, we have been involved in developing methods for measuring progress towards the program goals and assessing advances in the component technologies required to achieve such goals. Our goal has been to develop a lightweight evaluation paradigm for heterogeneous systems. In this paper, we utilize the Communicator evaluation corpus from 2001 and build on previous work applying the PARADISE evaluation framework to establish a baseline for fully automatic system evaluation. We train a regression tree to predict User Satisfaction using a random 80% of the dialogues for training. The metrics (features) we use for prediction are a fully automatic Task Success Measure, Efficiency Measures, and System Dialogue Act Behaviors extracted from the dialogue logfiles using the DATE (Dialogue Act Tagging for Evaluation) tagging scheme. The learned tree with the DATE metrics has a correlation of 0.614 (R² of 0.376) with the actual user satisfaction values for the held-out test set, while the learned tree without the DATE metrics has a correlation of 0.595 (R² of 0.35).
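The evaluation setup described in the abstract can be pictured with a small sketch. The snippet below is not the authors' implementation: it assumes a hypothetical per-dialogue feature matrix (task success, efficiency measures, and DATE dialogue-act counts as placeholder columns) and uses scikit-learn's DecisionTreeRegressor to stand in for the regression tree, with a random 80/20 split and Pearson correlation plus squared correlation reported on the held-out dialogues.

    # Minimal sketch (not the paper's code) of the prediction setup:
    # a regression tree trained on 80% of dialogues to predict User
    # Satisfaction, evaluated by correlation on the held-out 20%.
    # All feature names and data here are hypothetical placeholders.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeRegressor
    from scipy.stats import pearsonr

    rng = np.random.default_rng(0)

    # Stand-in feature matrix: columns would be the automatic Task Success
    # Measure, Efficiency Measures, and DATE dialogue-act counts.
    n_dialogues, n_features = 500, 10
    X = rng.normal(size=(n_dialogues, n_features))
    y = X @ rng.normal(size=n_features) + rng.normal(scale=0.5, size=n_dialogues)

    # Random 80% of the dialogues for training, 20% held out for testing.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, train_size=0.8, random_state=0)

    tree = DecisionTreeRegressor(max_depth=4, random_state=0)
    tree.fit(X_train, y_train)
    pred = tree.predict(X_test)

    # Correlation of predicted vs. actual satisfaction on the test set,
    # with the squared correlation as a simple R^2-style summary.
    r, _ = pearsonr(y_test, pred)
    print(f"correlation = {r:.3f}, R^2 = {r**2:.3f}")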
Year
2002
Venue
LREC
Keywords
cost effectiveness, regression tree
Field
Decision tree, Computer science, System evaluation, Natural language processing, Artificial intelligence, Task completion, Test set
DocType
Conference
Citations
9
PageRank
1.15
References
10
Authors
3
Name                 Order  Citations  PageRank
Helen Wright Hastie  1      133        11.91
Rashmi Prasad        2      20         2.22
Marilyn A Walker     3      3893       418.91