Abstract |
---|
While many traditional studies of semantic relatedness rely on lexical databases such as WordNet or Wiktionary, recent word embedding approaches have demonstrated their ability to capture syntactic and semantic information and to outperform lexicon-based methods. However, word senses are not disambiguated during the training of Word2Vec and GloVe, two well-known word embedding algorithms, and the path length between two word senses in a lexical database does not necessarily reflect their true semantic relatedness. In this paper, a novel approach that linearly combines Word2Vec and GloVe with the lexical database WordNet is proposed for measuring semantic relatedness. Experiments show that this simple method outperforms the state-of-the-art model SensEmbed.
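The abstract's core idea, linearly combining embedding-based cosine similarities with a WordNet-based score, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy 2-d vectors, the weights `alpha` and `beta`, and the fixed WordNet similarity value are all assumptions standing in for pretrained Word2Vec/GloVe embeddings and a real WordNet measure.

```python
import math

def cosine(u, v):
    # Cosine similarity between two dense vectors
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def combined_relatedness(w2v_sim, glove_sim, wordnet_sim, alpha=0.4, beta=0.4):
    # Linear combination of the three similarity signals;
    # the weights here are illustrative, not from the paper
    return alpha * w2v_sim + beta * glove_sim + (1 - alpha - beta) * wordnet_sim

# Toy 2-d vectors standing in for pretrained Word2Vec / GloVe embeddings
w2v = {"car": [0.9, 0.1], "automobile": [0.85, 0.2]}
glove = {"car": [0.7, 0.3], "automobile": [0.75, 0.25]}
wn_sim = 1.0  # e.g. WordNet path similarity for synonymous senses

score = combined_relatedness(
    cosine(w2v["car"], w2v["automobile"]),
    cosine(glove["car"], glove["automobile"]),
    wn_sim,
)
```

With these toy inputs all three signals agree that the pair is closely related, so the combined score is close to 1.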
Year | DOI | Venue
---|---|---
2016 | 10.1145/2872518.2889395 | WWW '16: 25th International World Wide Web Conference, Montréal, Québec, Canada, April 2016
Field | DocType | ISBN
---|---|---
Semantic similarity, Computer science, Lexical database, Semantic information, Lexicon, Artificial intelligence, Natural language processing, Word2vec, Word embedding, WordNet, Syntax | Conference | 978-1-4503-4144-8
Citations | PageRank | References
---|---|---
3 | 0.42 | 4
Authors |
---|
4
Name | Order | Citations | PageRank |
---|---|---|---|
Yang-Yin Lee | 1 | 20 | 3.70 |
Ke Hao | 2 | 22 | 4.08 |
Hen-Hsen Huang | 3 | 63 | 37.14 |
Hsin-hsi Chen | 4 | 2267 | 233.93 |