Abstract |
---|
Ontology evaluation is a critical task, even more so when the ontology is the output of an automatic system, rather than the result of a conceptualisation effort produced by a team of domain specialists and knowledge engineers. This paper provides an evaluation of the OntoLearn ontology learning system. The proposed evaluation strategy is twofold: first, we provide a detailed quantitative analysis of the ontology learning algorithms, in order to compute the accuracy of OntoLearn under different learning circumstances. Second, we automatically generate natural language descriptions of formal concept specifications, in order to facilitate per-concept qualitative analysis by domain specialists. |
Year | DOI | Venue |
---|---|---|
2004 | 10.3115/1220355.1220505 | COLING |
Keywords | Field | DocType
---|---|---|
conceptualisation effort,different learning circumstance,detailed quantitative analysis,ontolearn ontology,qualitative evaluation,proposed evaluation strategy,critical task,per-concept qualitative analysis,automatic system,ontology evaluation,domain specialist | Ontology (information science),Ontology-based data integration,Ontology alignment,Process ontology,Computer science,Ontology chart,Natural language processing,Artificial intelligence,Suggested Upper Merged Ontology,Upper ontology,Ontology learning | Conference |
Volume | Citations | PageRank
---|---|---|
C04-1 | 17 | 1.33 |
References | Authors
---|---|
3 | 4 |
Name | Order | Citations | PageRank |
---|---|---|---|
Roberto Navigli | 1 | 4087 | 187.88
Paola Velardi | 2 | 1553 | 163.66
Alessandro Cucchiarelli | 3 | 226 | 36.38 |
Francesca Neri | 4 | 65 | 7.77 |