| Abstract |
|---|
| Supervised semantic role labeling (SRL) systems are generally claimed to achieve accuracies of 80% or higher (Erk and Padó, 2006). These numbers, though, are the result of highly restricted evaluations, i.e., typically evaluating on hand-picked lemmas for which training data is available. In this paper we consider the performance of such systems when evaluating at the document level rather than at the lemma level. While it is well known that coverage gaps exist in the resources available for training supervised SRL systems, what we have been lacking until now is an understanding of the precise nature of this coverage problem and its impact on the performance of SRL systems. We present a typology of five different types of coverage gaps in FrameNet. We then analyze the impact of these coverage gaps on the performance of a supervised semantic role labeling system on full texts, showing an average oracle upper bound of 46.8%. |
| Year | Venue | Keywords |
|---|---|---|
| 2010 | COLING (Posters) | hand-picked lemma, full text, training data, coverage problem, supervised semantic role, coverage gap, lemma level, srl system, framenet-style semantic parsing, different type, document level |
| Field | DocType | Volume |
|---|---|---|
| Training set, Computer science, Oracle, Natural language processing, Artificial intelligence, Parsing, Semantic role labeling, Lemma (mathematics), FrameNet | Conference | C10-2 |
| Citations | PageRank | References |
|---|---|---|
| 12 | 0.66 | 20 |
| Authors |
|---|
| 2 |
| Name | Order | Citations | PageRank |
|---|---|---|---|
| Alexis Palmer | 1 | 73 | 13.59 |
| Caroline Sporleder | 2 | 453 | 31.84 |