Abstract |
---|
Most state-of-the-art approaches to named-entity recognition (NER) use semi-supervised information in the form of word clusters and lexicons. Recently, neural-network-based language models have been explored, since as a byproduct they generate highly informative vector representations of words, known as word embeddings. In this paper we present two contributions: a new form of learning word embeddings that can leverage information from relevant lexicons to improve the representations, and the first system to use neural word embeddings to achieve state-of-the-art results on named-entity recognition on both the CoNLL and OntoNotes benchmarks. Our system achieves an F1 score of 90.90 on the CoNLL 2003 test set, significantly better than any previous system trained on public data and matching a system that employs massive private industrial query-log data. |
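
The lexicon-infused embedding idea described in the abstract can be pictured as an augmented skip-gram objective: besides predicting nearby context words, the model also predicts which lexicons the current word or phrase belongs to. The sketch below shows only the training-pair generation step under that reading; the toy corpus, the lexicon names, and the `skipgram_pairs_with_lexicons` helper are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of lexicon-infused skip-gram training pairs.
# Assumption: lexicon membership is injected as extra pseudo-context
# tokens (e.g. "<LEX:PERSON>") that the embedding model must predict
# alongside ordinary context words.

from typing import Dict, List, Set, Tuple

def skipgram_pairs_with_lexicons(
    tokens: List[str],
    lexicons: Dict[str, Set[str]],  # lexicon name -> member tokens
    window: int = 2,
) -> List[Tuple[str, str]]:
    """Return (center, target) pairs for a skip-gram-style model,
    where targets are either real context words or pseudo-tokens
    marking the lexicons the center word belongs to."""
    pairs: List[Tuple[str, str]] = []
    for i, center in enumerate(tokens):
        # Ordinary skip-gram pairs within the context window.
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((center, tokens[j]))
        # Extra pairs: lexicon membership as pseudo-context.
        for name, members in lexicons.items():
            if center in members:
                pairs.append((center, f"<LEX:{name}>"))
    return pairs

if __name__ == "__main__":
    toks = "barack obama visited paris".split()
    lex = {"PERSON": {"barack", "obama"}, "LOCATION": {"paris"}}
    for pair in skipgram_pairs_with_lexicons(toks, lex):
        print(pair)
```

Under this reading, words sharing a lexicon are pulled toward predicting the same pseudo-token, so their embeddings end up closer even when their raw corpus contexts differ.
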
Year | DOI | Venue
---|---|---
2014 | 10.3115/v1/W14-1609 | CoNLL

DocType | Volume | Citations
---|---|---
Journal | abs/1404.5367 | 85

PageRank | References | Authors
---|---|---
3.27 | 21 | 3

Name | Order | Citations | PageRank
---|---|---|---
Alexandre Passos | 1 | 4083 | 167.18
Vineet Kumar | 2 | 196 | 26.07
Andrew Kachites McCallum | 3 | 19203 | 1588.22