Title
What a parser can learn from a semantic role labeler and vice versa
Abstract
In many NLP systems, there is a unidirectional flow of information in which a parser supplies input to a semantic role labeler. In this paper, we build a system that allows information to flow in both directions: we use semantic role predictions in choosing a single best parse. This process relies on an averaged perceptron model to distinguish likely semantic roles from erroneous ones; our system penalizes parses that give rise to low-scoring semantic roles. To explore the consequences of this, we perform two experiments. First, we use a baseline generative model to produce n-best parses, which are then reordered by our semantic model. Second, we use a modified version of our semantic role labeler to predict semantic roles at parse time. This modified labeler performs worse than our best full SRL because it is restricted to features that can be computed directly from the parser's packed chart. In both experiments, the resulting semantic predictions are then used to select parses. Finally, we feed the selected parses from each experiment to the full version of our semantic role labeler. We find that SRL performance can be improved over this baseline by selecting parses with likely semantic roles.
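The parse-selection idea in the abstract can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: the feature names, weights, and the way a parse's base score is combined with its role scores are all assumptions. The key point it shows is that each candidate parse is scored by a linear (averaged-perceptron-style) model over the semantic roles it would give rise to, so parses yielding low-scoring roles are penalized.

```python
def role_score(weights, features):
    """Linear perceptron-style score of one predicted semantic role."""
    return sum(weights.get(f, 0.0) for f in features)

def select_parse(nbest, weights):
    """Pick the parse whose base score plus total role score is highest.

    `nbest` is a list of (parse_id, base_score, roles) triples, where
    `roles` is a list of feature lists, one per predicted semantic role.
    """
    def total(parse):
        _, base, roles = parse
        # Low-scoring (unlikely) roles drag the parse's total down.
        return base + sum(role_score(weights, r) for r in roles)
    return max(nbest, key=total)[0]

# Toy example with made-up features: parse "B" has a slightly lower
# base (parser) score than "A", but its roles look more plausible to
# the role model, so it is selected.
weights = {"pred=give^role=ARG0^path=up-down": 1.5,
           "pred=give^role=ARG0^path=weird": -2.0}
nbest = [
    ("A", 0.2, [["pred=give^role=ARG0^path=weird"]]),
    ("B", 0.0, [["pred=give^role=ARG0^path=up-down"]]),
]
print(select_parse(nbest, weights))  # prints "B"
```

In an averaged perceptron, the weights used at test time would be the average of the weight vectors seen during training; only that final averaged vector is needed here.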
Year: 2010
Venue: EMNLP
Keywords: semantic role prediction, low-scoring semantic role, selected parses, semantic prediction, likely semantic role, semantic model, semantic role labeler, semantic role, modified labeler, n-best parses, semantic role labeling
Field: Computer science, Natural language processing, Artificial intelligence, Semantic data model, Semantic similarity, Information flow (information theory), Information retrieval, Parsing, Perceptron, Machine learning, Semantic role labeling, Generative model
DocType: Conference
Volume: D10-1
Citations: 2
PageRank: 0.38
References: 12
Authors: 3

Name                Order  Citations  PageRank
Stephen A. Boxwell  1      26         2.32
Dennis N. Mehay     2      3          1.06
Chris Brew          3      321        44.44