Title: Large-scale learning of word relatedness with constraints

Abstract: Prior work on computing semantic relatedness of words focused on representing their meaning in isolation, effectively disregarding inter-word affinities. We propose a large-scale data mining approach to learning word-word relatedness, where known pairs of related words impose constraints on the learning process. We learn for each word a low-dimensional representation, which strives to maximize the likelihood of a word given the contexts in which it appears. Our method, called CLEAR, is shown to significantly outperform previously published approaches. The proposed method is based on first principles, and is generic enough to exploit diverse types of text corpora, while having the flexibility to impose constraints on the derived word similarities. We also make publicly available a new labeled dataset for evaluating word relatedness algorithms, which we believe to be the largest such dataset to date.
Year: 2012
DOI: 10.1145/2339530.2339751
Venue: KDD
Keywords: inter-word affinity, large-scale learning, related word, word-word relatedness, diverse type, large-scale data mining approach, semantic relatedness, word relatedness algorithm, low-dimensional representation, word similarity, semantic similarity, computational semantics, first principle, data mining
Field: Semantic similarity, Data mining, Computer science, Text corpus, Exploit, Artificial intelligence, Natural language processing, Machine learning
DocType: Conference
Citations: 55
PageRank: 1.44
References: 15
Authors: 4

Name                   Order  Citations  PageRank
Guy Halawi             1      55         1.44
Gideon Dror            2      1761       104.44
Evgeniy Gabrilovich    3      4573       224.48
Yehuda Koren           4      9090       484.08