Title
Linguistically-Informed Self-Attention for Semantic Role Labeling.
Abstract
The current state-of-the-art end-to-end semantic role labeling (SRL) model is a deep neural network architecture with no explicit linguistic features. However, prior work has shown that gold syntax trees can dramatically improve SRL, suggesting that neural network models could see great improvements from explicit modeling of syntax. In this work, we present linguistically-informed self-attention (LISA): a new neural network model that combines multi-head self-attention with multi-task learning across dependency parsing, part-of-speech tagging, predicate detection and SRL. Syntax is incorporated, for example, by training one attention head to attend to the syntactic parent of each token. Our model can perform all of the above tasks, and it is also trained such that, if a high-quality syntactic parse is already available, it can be beneficially injected at test time without re-training our SRL model. In experiments on the CoNLL-2005 SRL dataset, LISA achieves an increase of 2.5 F1 absolute over the previous state-of-the-art on newswire with predicted predicates, and more than 2.0 F1 on out-of-domain data. On CoNLL-2012 English SRL we also show an improvement of more than 3.0 F1, a 13% reduction in error.
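The sketch below illustrates the core mechanism the abstract describes: one self-attention head whose attention distribution is treated as a prediction of each token's syntactic parent, so it can be supervised with gold parent indices during training and overridden by an externally supplied parse at test time. This is a minimal NumPy illustration, not the authors' implementation; the function names, shapes, and use of plain dot-product attention (rather than the paper's exact scoring) are assumptions.

```python
# Minimal sketch (not the authors' code) of syntactically-informed self-attention:
# one head's attention weights double as a distribution over syntactic parents.
# All names, shapes, and the NumPy formulation here are illustrative assumptions.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def syntactic_attention_head(X, Wq, Wk, Wv, gold_parents=None, inject_parse=None):
    """One attention head over a sentence X of shape (n, d_model).

    gold_parents: optional (n,) array of gold parent indices; used to compute an
                  auxiliary parsing loss on the attention distribution (training).
    inject_parse: optional (n,) array of parent indices from an external parser;
                  if given, attention is forced to those parents (test-time injection).
    """
    n = X.shape[0]
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # (n, n) parent scores
    attn = softmax(scores, axis=-1)           # row i: P(parent of token i = token j)

    parse_loss = None
    if gold_parents is not None:              # auxiliary dependency-parsing objective
        parse_loss = -np.mean(np.log(attn[np.arange(n), gold_parents] + 1e-9))

    if inject_parse is not None:              # replace attention with a one-hot parse
        attn = np.eye(n)[inject_parse]

    return attn @ V, attn, parse_loss

# Toy usage: 4 tokens, model size 8, head size 4 (hypothetical values).
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 4)) for _ in range(3))
out, attn, loss = syntactic_attention_head(
    X, Wq, Wk, Wv, gold_parents=np.array([1, 1, 3, 3]))
```

The same head thus serves both as a dependency parser (via the auxiliary loss) and as the channel through which a higher-quality external parse can be injected at test time without retraining.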
Year
2018
DOI
10.18653/v1/d18-1548
Venue
EMNLP
DocType
Conference
Volume
abs/1804.08199
Citations
13
PageRank
0.46
References
36
Authors
5
Name                        Order  Citations  PageRank
Emma Strubell               1      82         7.10
Patrick Verga               2      97         9.11
Daniel Andor                3      134        6.73
David J. Weiss              4      446        19.11
Andrew Kachites McCallum    5      19203      1588.22