Title
Simulating Bandit Learning from User Feedback for Extractive Question Answering
Abstract
We study learning from user feedback for extractive question answering by simulating feedback using supervised data. We cast the problem as contextual bandit learning, and analyze the characteristics of several learning scenarios with a focus on reducing data annotation. We show that systems initially trained on a small number of examples can dramatically improve given feedback from users on model-predicted answers, and that one can use existing datasets to deploy systems in new domains without any annotation, instead improving the system on the fly via user feedback.
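The core idea in the abstract, using supervised answers as a stand-in for real user feedback, can be illustrated with a minimal sketch. Everything below is an illustrative assumption rather than the paper's actual implementation: the TinySpanScorer toy model, the binary simulated_reward function, and the REINFORCE-style update are hypothetical names chosen for this sketch. The model samples an answer span, the gold span from a supervised dataset plays the role of the user accepting or rejecting it, and the resulting scalar reward scales a policy-gradient update.

import torch
import torch.nn as nn

class TinySpanScorer(nn.Module):
    """Toy stand-in for a QA encoder: scores each token as a span start/end."""
    def __init__(self, vocab_size=1000, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.start = nn.Linear(dim, 1)
        self.end = nn.Linear(dim, 1)

    def forward(self, token_ids):
        h = self.embed(token_ids)                    # (seq_len, dim)
        return self.start(h).squeeze(-1), self.end(h).squeeze(-1)

def simulated_reward(pred_span, gold_span):
    """Binary feedback a user could give: 1 if the span is right, else 0.
    A graded signal such as token-level F1 would slot in here unchanged."""
    return 1.0 if pred_span == gold_span else 0.0

model = TinySpanScorer()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One simulated interaction: context tokens plus a gold span from
# supervised data standing in for the user.
token_ids = torch.randint(0, 1000, (50,))
gold_span = (10, 12)

start_logits, end_logits = model(token_ids)
start_dist = torch.distributions.Categorical(logits=start_logits)
end_dist = torch.distributions.Categorical(logits=end_logits)

# Sample a span (exploration), show it to the "user", collect the reward.
# A real system would constrain end >= start; omitted here for brevity.
s, e = start_dist.sample(), end_dist.sample()
reward = simulated_reward((s.item(), e.item()), gold_span)

# REINFORCE-style step: scale the log-likelihood of the sampled span
# by the scalar reward, so accepted answers become more likely.
loss = -(start_dist.log_prob(s) + end_dist.log_prob(e)) * reward
opt.zero_grad()
loss.backward()
opt.step()

Sampling the span, rather than taking the argmax, provides the exploration that contextual bandit learning relies on; only the sampled action's reward is observed, which is what distinguishes this setting from full supervision.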
Year
2022
DOI
10.18653/v1/2022.acl-long.355
Venue
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022), Vol 1: (Long Papers)
DocType
Conference
Volume
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Citations
0
PageRank
0.34
References
0
Authors
3
Name         Order  Citations  PageRank
Ge Gao           1         71      6.12
Eunsol Choi      2          0      0.34
Yoav Artzi       3        483     26.99