Title
Neural machine translating from natural language to SPARQL
Abstract
SPARQL is a highly powerful query language for an ever-growing number of resources and knowledge graphs represented in the Resource Description Framework (RDF) data format. Using it requires a certain familiarity with the entities in the domain to be queried as well as expertise in the language’s syntax and semantics, neither of which average human web users can be assumed to possess. To overcome this limitation, automatically translating natural language questions to SPARQL queries has been a vibrant field of research. However, to date, the vast success of deep learning methods has not yet been fully propagated to this research problem. This paper contributes to filling this gap by evaluating the utilization of eight different Neural Machine Translation (NMT) models for the task of translating from natural language to the structured query language SPARQL. While highlighting the importance of high-quantity and high-quality datasets, the results show a dominance of a Convolutional Neural Network (CNN)-based architecture with a Bilingual Evaluation Understudy (BLEU) score of up to 98 and an accuracy of up to 94%.
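To make the task and metrics mentioned in the abstract concrete, the sketch below pairs a natural language question with a SPARQL query and scores a hypothetical model prediction with sentence-level BLEU and exact-match accuracy. The question/query pair, the whitespace tokenization, and the use of NLTK are illustrative assumptions, not the paper's own evaluation setup.

# Illustration (assumed example) of the NL-to-SPARQL translation task: an NMT
# model receives a natural language question and must emit the corresponding
# SPARQL query; its output is then compared against the reference query.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

question = "What is the capital of Germany?"

# Reference (gold) SPARQL query for the question above, tokenized on whitespace.
reference = "SELECT ?capital WHERE { dbr:Germany dbo:capital ?capital . }".split()

# A hypothetical model prediction to be evaluated (misses the trailing dot).
prediction = "SELECT ?capital WHERE { dbr:Germany dbo:capital ?capital }".split()

# Sentence-level BLEU (0-1 here; the paper reports it scaled to 0-100).
bleu = sentence_bleu(
    [reference], prediction, smoothing_function=SmoothingFunction().method1
)

# Exact-match accuracy, the second metric mentioned in the abstract: a
# prediction counts as correct only if it equals the reference query.
accuracy = float(prediction == reference)

print(f"Question: {question}")
print(f"BLEU: {100 * bleu:.1f}")
print(f"Accuracy: {accuracy:.0%}")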
Year
2021
DOI
10.1016/j.future.2020.12.013
Venue
Future Generation Computer Systems
Keywords
SPARQL, Neural Machine Translation, Natural language queries, Learning structured knowledge
DocType
Journal
Volume
117
ISSN
0167-739X
Citations
1
PageRank
0.35
References
0
Authors
3
Name                Order  Citations  PageRank
Xiaoyu Yin          1      1          0.35
Dagmar Gromann      2      5          5.60
Sebastian Rudolph   3      2          1.39