Title
Ranking Like Human: Global-View Matching via Reinforcement Learning for Answer Selection
Abstract
Answer Selection (AS) is of great importance for open-domain Question Answering (QA). Previous approaches typically model each question-candidate pair independently. However, when selecting correct answers from the candidate set, the question is usually too brief to provide enough matching information for a correct decision. In this paper, we propose a reinforcement learning (RL) framework that utilizes the rich overlapping information among answer candidates to help judge the correctness of each candidate. In particular, we design a policy network whose state aggregates both the question-candidate matching information and the candidate-candidate matching information through a global-view encoder. Experiments on the WikiQA and SelQA benchmarks demonstrate that our RL framework substantially improves ranking performance.
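The abstract describes a policy network whose state combines question-candidate matching signals with candidate-candidate ("global-view") matching signals. The sketch below is a minimal, illustrative PyTorch rendering of that idea, not the authors' implementation: the module name GlobalViewPolicy, the elementwise question-candidate matching, and the attention-based pooling over other candidates are all assumptions made only to show how such a state could be aggregated and scored.

```python
# Illustrative sketch (assumed names and design, not the authors' code) of a
# policy state that aggregates question-candidate and candidate-candidate
# (global-view) matching signals, then scores candidates.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalViewPolicy(nn.Module):
    def __init__(self, hidden: int):
        super().__init__()
        # Scores each candidate from its aggregated state vector.
        self.scorer = nn.Sequential(
            nn.Linear(3 * hidden, hidden),
            nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, q_vec: torch.Tensor, cand_vecs: torch.Tensor) -> torch.Tensor:
        """
        q_vec:     (hidden,)            encoded question
        cand_vecs: (num_cands, hidden)  encoded candidate answers (>= 2 assumed)
        returns:   (num_cands,)         policy distribution over candidates
        """
        n = cand_vecs.size(0)
        # Question-candidate matching: compare the question with every candidate.
        q_match = q_vec.unsqueeze(0).expand(n, -1) * cand_vecs
        # Candidate-candidate (global-view) matching: attend over the other
        # candidates and pool them into one context vector per candidate.
        sim = cand_vecs @ cand_vecs.t()              # (n, n) similarity
        sim.fill_diagonal_(float('-inf'))            # ignore self-matching
        attn = F.softmax(sim, dim=-1)
        global_ctx = attn @ cand_vecs                # (n, hidden)
        # Policy state = [candidate, question match, global-view context].
        state = torch.cat([cand_vecs, q_match, global_ctx], dim=-1)
        logits = self.scorer(state).squeeze(-1)      # (n,)
        return F.softmax(logits, dim=-1)
```

In an RL setup of this kind, one would sample an answer from the returned distribution and reward correct selections; the sketch only covers state construction and scoring, with the question and candidate encoders left abstract.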
Year
2019
DOI
10.1109/IALP48816.2019.9037725
Venue
2019 International Conference on Asian Language Processing (IALP)
Keywords
Answer Selection, Reinforcement Learning
DocType
Conference
ISSN
2159-1962
ISBN
978-1-7281-5015-4
Citations
0
PageRank
0.34
References
26
Authors
5
Name              Order  Citations  PageRank
Ruiying Geng      1      0          2.70
Ruiying Geng      2      0          2.70
Ping Jian         3      6          6.19
Yuansheng Song    4      0          0.34
Fandong Meng      5      311        9.11