Title
Statistical decision-tree models for parsing
Abstract
Syntactic natural language parsers have shown themselves to be inadequate for processing highly ambiguous large-vocabulary text, as is evidenced by their poor performance on domains like the Wall Street Journal, and by the movement away from parsing-based approaches to text-processing in general. In this paper, I describe SPATTER, a statistical parser based on decision-tree learning techniques that constructs a complete parse for every sentence and achieves accuracy rates far better than any previously published result. This work is based on the following premises: (1) grammars are too complex and detailed to develop manually for most interesting domains; (2) parsing models must rely heavily on lexical and contextual information to analyze sentences accurately; and (3) existing n-gram modeling techniques are inadequate for parsing models. In experiments comparing SPATTER with IBM's computer manuals parser, SPATTER significantly outperforms the grammar-based parser. Evaluated against the Penn Treebank Wall Street Journal corpus using the PARSEVAL measures, SPATTER achieves 86% precision, 86% recall, and 1.3 crossing brackets per sentence for sentences of 40 words or less, and 91% precision, 90% recall, and 0.5 crossing brackets for sentences between 10 and 20 words in length.
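The PARSEVAL measures reported in the abstract (precision, recall, and crossing brackets) compare the constituent spans of a predicted parse against a gold-standard parse. The sketch below is an illustration of how these measures are defined, not code from the paper; the span representation and the example parses are assumptions made for the demonstration.

```python
# Illustrative sketch of PARSEVAL-style bracket scoring (not from the paper).
# A parse is represented as a set of (start, end) constituent spans over
# word positions, with `end` exclusive.

def parseval_scores(gold, predicted):
    """Return (precision, recall, crossing-bracket count) for one sentence.

    gold, predicted: sets of (start, end) spans.
    """
    matched = gold & predicted
    precision = len(matched) / len(predicted) if predicted else 0.0
    recall = len(matched) / len(gold) if gold else 0.0
    # A predicted bracket "crosses" a gold bracket when the two spans
    # overlap but neither one contains the other.
    crossing = sum(
        1
        for (ps, pe) in predicted
        if any(ps < gs < pe < ge or gs < ps < ge < pe for (gs, ge) in gold)
    )
    return precision, recall, crossing

# Hypothetical gold and predicted parses of a 5-word sentence.
gold = {(0, 5), (0, 2), (2, 5), (3, 5)}
pred = {(0, 5), (0, 2), (1, 3), (2, 5)}
p, r, x = parseval_scores(gold, pred)  # 3 of 4 predicted spans match;
                                       # (1, 3) crosses gold span (0, 2).
```

Corpus-level PARSEVAL scores, such as the 86% precision/recall figures above, aggregate these per-sentence counts over the whole test set.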
Year: 1995
DOI: 10.3115/981658.981695
Venue: Meeting of the Association for Computational Linguistics
Keywords: wall street journal, grammar-based parser, computer manuals parser, accuracy rate, parsing model, complete parse, parseval measure, journal corpus, statistical decision-tree model, penn treebank wall street, statistical parser, decision tree, decision tree learning, natural language
DocType: Conference
Volume: cmp-lg/9504030
ISSN:
Conference: Proceedings of the 33rd Annual Meeting of the ACL
Citations: 279
PageRank: 108.09
References: 4
Authors: 1
Name: David M. Magerman
Order: 1
Citations: 7265
PageRank: 12.15