Title |
---|
Best from Top k Versus Top 1: Improving Distant Supervision Relation Extraction with Deep Reinforcement Learning |
Abstract |
---|
Distant supervision relation extraction is a promising approach to finding new relation instances in large text corpora. Most previous work employs the top 1 strategy, i.e., predicting the relation of a sentence to be the one with the highest confidence score, which is not always the optimal choice. To improve distant supervision relation extraction, this work applies the best from top k strategy to explore relations with lower confidence scores. We realize the best from top k strategy within a deep reinforcement learning framework, in which the model learns to select the optimal relation among the top k candidates. Specifically, we employ a deep Q-network trained to optimize a reward function that reflects extraction performance under distant supervision. Experiments on three public datasets (news articles, Wikipedia, and biomedical papers) demonstrate that the proposed strategy significantly improves the performance of state-of-the-art relation extractors, yielding an average F1-score improvement of 5.13% over four competitive baselines. |
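The core idea in the abstract, selecting the best relation from the top k candidates rather than always taking the top 1, can be illustrated with a minimal sketch. All names here are hypothetical: `conf` stands in for a base classifier's confidence scores, and the Q-network is reduced to a toy linear scorer `W` rather than the trained DQN described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def top_k_candidates(scores, k):
    """Indices of the k relations with the highest confidence scores."""
    return np.argsort(scores)[::-1][:k]

def q_values(state, candidates, W):
    """Toy stand-in for a Q-network: a linear scorer over
    (sentence representation, candidate relation) pairs.
    W is a hypothetical learned weight matrix, one row per relation."""
    return np.array([state @ W[r] for r in candidates])

# Hypothetical setup: 5 relation types, an 8-dim sentence representation.
n_relations, dim, k = 5, 8, 3
W = rng.normal(size=(n_relations, dim))   # stands in for trained DQN weights
state = rng.normal(size=dim)              # sentence representation
conf = rng.random(n_relations)            # base classifier confidence scores

top1 = int(np.argmax(conf))               # baseline: top 1 prediction
cands = top_k_candidates(conf, k)         # keep the top k candidates instead
best = int(cands[np.argmax(q_values(state, cands, W))])  # agent reranks them
print(top1, best)
```

The point of the sketch is that the final prediction `best` may differ from `top1` whenever the Q-function ranks a lower-confidence candidate higher, which is exactly the flexibility the top 1 strategy lacks.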
Year | DOI | Venue |
---|---|---|
2019 | 10.1007/978-3-030-16142-2_16 | Pacific-Asia Conference on Knowledge Discovery and Data Mining |
Field | DocType | Citations |
---|---|---|
Confidence score, Computer science, Text corpus, Artificial intelligence, Sentence, Machine learning, Reinforcement learning, Relationship extraction | Conference | 0 |
PageRank | References | Authors |
---|---|---|
0.34 | 0 | 4 |
Name | Order | Citations | PageRank |
---|---|---|---|
Yaocheng Gui | 1 | 4 | 1.75 |
Qian Liu | 2 | 4 | 1.74 |
Tingming Lu | 3 | 0 | 1.01 |
Zhiqiang Gao | 4 | 349 | 39.84 |