Abstract |
---|
The task of recommending relevant scientific literature for a draft academic paper has recently received significant interest. In our effort to ease the discovery of scientific literature and augment scientific writing, we aim to improve the relevance of results based on a shallow semantic analysis of the source document and the potential documents to recommend. We investigate the utility of automatic argumentative and rhetorical annotation of documents for this purpose. Specifically, we integrate automatic Core Scientific Concepts (CoreSC) classification into a prototype context-based citation recommendation system and investigate its usefulness to the task. We frame citation recommendation as an information retrieval task and use the categories of the annotation scheme to apply different weights in the similarity formula. Our results show interesting and consistent correlations between the type of citation and the type of sentence containing the relevant information. |
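The abstract describes weighting the retrieval similarity formula by CoreSC sentence category. A minimal sketch of one way such category-weighted scoring could work is below; the class names, weight values, and helper functions are illustrative assumptions, not the paper's actual parameters or implementation.

```python
# Hypothetical sketch of CoreSC-weighted retrieval scoring: term counts from
# each sentence are scaled by a weight for that sentence's CoreSC class
# before computing cosine similarity between query context and candidate
# document. Class names and weights here are invented for illustration.
from collections import Counter
import math

# Illustrative per-class weights (assumed, not taken from the paper).
CORESC_WEIGHTS = {"Hypothesis": 1.5, "Method": 1.2, "Result": 1.0, "Background": 0.5}

def weighted_vector(sentences):
    """Build a term vector from (coresc_class, tokens) pairs, scaling each
    sentence's term counts by its CoreSC class weight."""
    vec = Counter()
    for cls, tokens in sentences:
        w = CORESC_WEIGHTS.get(cls, 1.0)
        for t in tokens:
            vec[t] += w
    return vec

def cosine(a, b):
    """Standard cosine similarity between two sparse term vectors."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

# Toy query context and candidate document.
query = weighted_vector([("Hypothesis", ["citation", "recommendation"])])
doc = weighted_vector([("Method", ["citation", "context"]),
                       ("Background", ["recommendation"])])
score = cosine(query, doc)
```

Tuning the per-class weights (e.g. up-weighting Method sentences for method-type citations) is the kind of knob the paper's experiments would vary.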
Year | Venue | Keywords
---|---|---
2016 | LREC 2016 - TENTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION | context-based, citation recommendation, CoreSC, classification, annotation

Field | DocType | Citations
---|---|---
Information retrieval, Computer science, Context based, Citation, Natural language processing, Artificial intelligence | Conference | 3

PageRank | References | Authors
---|---|---
0.39 | 7 | 5
Name | Order | Citations | PageRank
---|---|---|---
Daniel Duma | 1 | 28 | 3.48 |
Maria Liakata | 2 | 375 | 30.40 |
Amanda Clare | 3 | 592 | 47.37 |
James Ravenscroft | 4 | 8 | 2.24 |
Ewan Klein | 5 | 47 | 6.45 |