Title
An overview of semantic search evaluation initiatives
Abstract
Recent work on searching the Semantic Web has yielded a wide range of approaches with respect to the underlying search mechanisms, results management and presentation, and style of input. Each approach impacts upon the quality of the information retrieved and the user's experience of the search process. However, despite the wealth of experience accumulated from evaluating Information Retrieval (IR) systems, the evaluation of Semantic Web search systems has largely been developed in isolation from mainstream IR evaluation with a far less unified approach to the design of evaluation activities. This has led to slow progress and low interest when compared to other established evaluation series, such as TREC for IR or OAEI for Ontology Matching. In this paper, we review existing approaches to IR evaluation and analyse evaluation activities for Semantic Web search systems. Through a discussion of these, we identify their weaknesses and highlight the future need for a more comprehensive evaluation framework that addresses current limitations.
Year
2015
DOI
10.1016/j.websem.2014.10.001
Venue
Web Semantics: Science, Services and Agents on the World Wide Web
Keywords
Semantic search, Usability, Evaluation, Benchmarking, Performance, Information retrieval
Field
Ontology alignment, Data mining, IR evaluation, World Wide Web, Information retrieval, Semantic search, Computer science, Usability, Semantic Web, Social Semantic Web, Benchmarking
DocType
Journal
Volume
30
Issue
C
ISSN
1570-8268
Citations
2
PageRank
0.37
References
137
Authors
4
Name                    Order  Citations  PageRank
Khadija M. Elbedweihy   1      2          0.37
Stuart N. Wrigley       2      181        20.56
Paul Clough             3      1308       111.91
Fabio Ciravegna         4      1635       140.18