Abstract
---
We consider the supervised training setting in which we learn task-specific word embeddings. We assume that we start with initial embeddings learned from unlabelled data and update them to learn task-specific embeddings for words in the supervised training data. However, for new words in the test set, we must use either their initial embeddings or a single unknown embedding, which often leads to errors. We address this by learning a neural network to map from initial embeddings to the task-specific embedding space, via a multi-loss objective function. The technique is general, but here we demonstrate its use for improved dependency parsing (especially for sentences with out-of-vocabulary words), as well as for downstream improvements on sentiment analysis.
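The abstract's core idea can be illustrated with a toy sketch: learn a map from initial (pretrained) embeddings to the task-specific space on in-vocabulary words, then apply it to out-of-vocabulary words at test time instead of falling back to an unknown embedding. The paper does not specify its architecture or loss terms here, so this is a minimal, hypothetical stand-in: a linear map trained by gradient descent on a two-term objective (squared error plus an L2 penalty), with all data randomly generated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the paper's embeddings (all values are synthetic):
# X: initial (pretrained) embeddings for in-vocabulary training words.
# Y: their task-specific embeddings after supervised training.
d_in, d_out, n = 8, 8, 200
W_true = rng.normal(size=(d_in, d_out))          # hidden ground-truth map
X = rng.normal(size=(n, d_in))
Y = X @ W_true + 0.01 * rng.normal(size=(n, d_out))

# Learn a linear map W by gradient descent on a two-term objective:
# mean squared reconstruction error plus an L2 penalty. This is only a
# stand-in for the paper's multi-loss objective, whose terms differ.
W = np.zeros((d_in, d_out))
lr, lam = 0.1, 1e-3
for _ in range(500):
    residual = X @ W - Y                          # (n, d_out)
    grad = (X.T @ residual) / n + lam * W         # gradient of both terms
    W -= lr * grad

# At test time, an out-of-vocabulary word's initial embedding x_new is
# mapped into the task-specific space instead of using a single <unk>.
x_new = rng.normal(size=(1, d_in))
mapped = x_new @ W

mse = float(np.mean((X @ W - Y) ** 2))
print(mse)  # small residual error on the training words
```

In the paper the mapping is a neural network rather than a single linear layer, but the usage pattern is the same: fit the map on words seen in supervised training, then apply it to unseen words' pretrained embeddings.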
Year | DOI | Venue
---|---|---
2016 | 10.18653/v1/W16-1612 | Meeting of the Association for Computational Linguistics

DocType | Volume | Citations
---|---|---
Conference | abs/1510.02387 | 2

PageRank | References | Authors
---|---|---
0.39 | 32 | 4
Name | Order | Citations | PageRank |
---|---|---|---
Pranava Swaroop Madhyastha | 1 | 24 | 10.59 |
Mohit Bansal | 2 | 871 | 63.19 |
Kevin Gimpel | 3 | 1545 | 79.71 |
Karen Livescu | 4 | 1254 | 71.43 |