Abstract |
---|
There is rising interest in vector-space word embeddings and their use in NLP, especially given recent methods for their fast estimation at very large scale. Nearly all this work, however, assumes a single vector per word type, ignoring polysemy and thus jeopardizing their usefulness for downstream tasks. We present an extension to the Skip-gram model that efficiently learns multiple embeddings per word type. It differs from recent related work by jointly performing word sense discrimination and embedding learning, by non-parametrically estimating the number of senses per word type, and by its efficiency and scalability. We present new state-of-the-art results on the word similarity in context task and demonstrate the model's scalability by training on a corpus of nearly 1 billion tokens in less than 6 hours on a single machine. |
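The abstract's core idea, jointly discriminating senses while learning embeddings, can be illustrated with a short sketch of the non-parametric sense-assignment step it describes: each occurrence's averaged context vector is compared against the word's known sense clusters, and a new sense is created when no cluster is similar enough. This is only a minimal illustration, not the authors' released code; the names (`assign_sense`, `cluster_centers`) and the threshold value `lam` are hypothetical.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity, with a small epsilon to avoid division by zero.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

def assign_sense(context_vec, cluster_centers, lam=0.5):
    """Return the sense index for this occurrence, creating a new
    sense cluster when no existing one exceeds similarity `lam`."""
    if not cluster_centers:                        # first occurrence: sense 0
        cluster_centers.append(context_vec.copy())
        return 0
    sims = [cosine(context_vec, c) for c in cluster_centers]
    best = int(np.argmax(sims))
    if sims[best] < lam:                           # too far from every sense:
        cluster_centers.append(context_vec.copy())  # allocate a new sense
        return len(cluster_centers) - 1
    cluster_centers[best] += context_vec           # fold context into center
    return best
```

In the full model, the returned index would select which of the word's sense embeddings receives the Skip-gram gradient update for that occurrence.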
Year | Venue | DocType | Volume | Citations | PageRank | References | Authors
---|---|---|---|---|---|---|---
2015 | EMNLP | Conference | abs/1504.06654 | 124 | 3.25 | 25 | 4
Name | Order | Citations | PageRank
---|---|---|---
Arvind Neelakantan | 1 | 408 | 17.77
Jeevan Shankar | 2 | 124 | 3.25
Alexandre Passos | 3 | 4083 | 167.18
Andrew Kachites McCallum | 4 | 19203 | 1588.22