Title
Pretraining Methods for Dialog Context Representation Learning
Abstract
This paper examines various unsupervised pretraining objectives for learning dialog context representations. Two novel methods of pretraining dialog context encoders are proposed, and a total of four methods are examined. Models pretrained with each objective are fine-tuned and evaluated on a set of downstream dialog tasks using the MultiWOZ dataset, and strong performance improvements are observed. Further evaluation shows that these pretraining objectives lead not only to better performance, but also to faster convergence, models that are less data-hungry, and better domain generalizability.
Year
2019
Venue
Meeting of the Association for Computational Linguistics
Volume
abs/1906.00414
Citations
2
PageRank
0.36
References
0
Authors
4
Name                    Order   Citations   PageRank
Shikib Mehri            1       6           3.50
Evgeniia Razumovskaia   2       2           0.36
Tiancheng Zhao          3       1361        0.62
Maxine Eskénazi         4       2           0.36