Title
An alternative method of training probabilistic LR parsers
Abstract
We discuss existing approaches to training LR parsers, which have been used for statistical resolution of structural ambiguity. These approaches are non-optimal, in the sense that certain probability distributions cannot be obtained. In particular, some probability distributions expressible in terms of a context-free grammar cannot be expressed in terms of the LR parser constructed from that grammar, under the restrictions of the existing training approaches. We present an alternative way of training that is provably optimal, and that allows all probability distributions expressible in terms of the context-free grammar to be carried over to the LR parser. We also demonstrate empirically that this kind of training can be effectively applied to a large treebank.
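As a rough illustration of the two kinds of distributions the abstract contrasts (the notation below is ours, not taken from the paper): a probabilistic context-free grammar assigns each rule A → α a probability normalized per left-hand-side nonterminal, and the probability of a parse tree is the product of its rule probabilities; a probabilistic LR parser instead attaches probabilities to the shift/reduce actions available in each LR state, normalized per state, and the probability of a parse is the product of the action probabilities along the corresponding LR computation. The paper's point is that, under the existing training schemes, the second model cannot always reproduce the first.
\[
\sum_{\alpha} p(A \to \alpha) = 1 \ \text{ for each nonterminal } A,
\qquad
P_{\mathrm{CFG}}(t) = \prod_{(A \to \alpha) \in t} p(A \to \alpha)
\]
\[
\sum_{a \in \mathrm{actions}(q)} p(a \mid q) = 1 \ \text{ for each LR state } q,
\qquad
P_{\mathrm{LR}}(t) = \prod_{(q_i, a_i) \in c(t)} p(a_i \mid q_i)
\]
Here \(c(t)\) denotes the sequence of (state, action) pairs in the LR computation that builds tree \(t\); these are generic textbook formulations, not necessarily the paper's exact definitions.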
Year
2004
DOI
10.3115/1218955.1219025
Venue
ACL
Keywords
context-free grammar,training probabilistic lr parsers,structural ambiguity,statistical resolution,provably optimal,existing approach,lr parsers,alternative method,lr parser,large treebank,probability distributions expressible,probability distribution,context free grammar
Field
LR parser,Computer science,Simple LR parser,GLR parser,Probability distribution,Artificial intelligence,Treebank,Natural language processing,Parsing,Probabilistic logic,Canonical LR parser,Machine learning
DocType
Conference
Volume
P04-1
Citations
3
PageRank
0.39
References
11
Authors
2
Name, Order, Citations, PageRank
Mark-Jan Nederhof, 1, 387, 53.30
Giorgio Satta, 2, 902, 90.85