Abstract |
---|
It is well known that the accuracies of statistical parsers trained on the Penn Treebank, when measured on test sets drawn from the same corpus, tend to overestimate their actual parsing performance. This gives rise to the need to evaluate parsing performance on corpora from different domains. Evaluating multiple parsers on test sets from different domains can give a detailed picture of the relative strengths and weaknesses of different parsing approaches. Such information is also necessary to guide the choice of parser in applications such as machine translation, where text from multiple domains needs to be handled. In this paper, we report a benchmarking study of different state-of-the-art parsers for English, both constituency and dependency. The constituency parser output is converted into CoNLL-style dependency trees so that parsing performance can be compared across formalisms. Specifically, we train rerankers for the Berkeley and Stanford parsers to study the usefulness of reranking for handling texts from different domains. The results of our experiments lead to interesting insights about the out-of-domain performance of different English parsers. |
Year | Venue | Keywords |
---|---|---|
2012 | LREC 2012 - Eighth International Conference on Language Resources and Evaluation | Statistical parsing, Parser benchmarking, Discriminative reranking, Constituency-to-dependency conversion |
Field | DocType | Citations
---|---|---|
Computer science, Machine translation, Artificial intelligence, Treebank, Natural language processing, Parsing, Rotation formalisms in three dimensions, Benchmarking | Conference | 2 |
PageRank | References | Authors
---|---|---|
0.38 | 18 | 2 |
Name | Order | Citations | PageRank |
---|---|---|---|
Sudheer Kolachina | 1 | 32 | 3.67 |
Prasanth Kolachina | 2 | 14 | 4.69 |