Title
Using Word Embedding to Evaluate the Coherence of Topics from Twitter Data.
Abstract
Scholars often seek to understand topics discussed on Twitter using topic modelling approaches. Several coherence metrics have been proposed for evaluating the coherence of the topics generated by these approaches, including those based on the pre-calculated Pointwise Mutual Information (PMI) of word pairs and on Latent Semantic Analysis (LSA) word representation vectors. As Twitter data contains abbreviations and a number of peculiarities (e.g. hashtags), it can be challenging to obtain effective PMI statistics or LSA word representations from such data. Recently, Word Embedding (WE) has emerged as a particularly effective approach for capturing the similarity among words. Hence, in this paper, we propose new Word Embedding-based topic coherence metrics. To determine the usefulness of these new metrics, we compare them with the previous PMI/LSA-based metrics. We also conduct a large-scale crowdsourced user study to determine whether the new Word Embedding-based metrics better align with human preferences. Using two Twitter datasets, our results show that the WE-based metrics can capture the coherence of topics in tweets more robustly and efficiently than the PMI/LSA-based ones.
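The abstract does not spell out the exact formulation of the proposed WE-based coherence metrics. The following is a minimal sketch of the general idea, assuming the common formulation in which a topic's coherence is scored as the mean pairwise cosine similarity of its top words under a pre-trained word embedding; the function names, the toy embedding dictionary, and the out-of-vocabulary handling are illustrative assumptions, not the authors' exact method.

```python
# Sketch of a WE-based topic coherence score: mean pairwise cosine similarity
# of a topic's top-N words under a pre-trained embedding. Illustrative only;
# the toy `embeddings` dict stands in for a real model (e.g. embeddings
# trained on tweets), and the skip-on-missing-word rule is an assumption.
from itertools import combinations
import numpy as np


def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))


def we_topic_coherence(top_words: list[str], embeddings: dict[str, np.ndarray]) -> float:
    """Average pairwise cosine similarity over a topic's top words.

    Words missing from the embedding vocabulary (common for Twitter
    abbreviations and hashtags) are skipped.
    """
    vectors = [embeddings[w] for w in top_words if w in embeddings]
    if len(vectors) < 2:
        return 0.0
    sims = [cosine(u, v) for u, v in combinations(vectors, 2)]
    return sum(sims) / len(sims)


# Toy usage: three in-vocabulary words and one out-of-vocabulary hashtag.
rng = np.random.default_rng(0)
embeddings = {w: rng.normal(size=50) for w in ["election", "vote", "party"]}
print(we_topic_coherence(["election", "vote", "party", "#ge2015"], embeddings))
```

A higher score indicates that the topic's top words lie closer together in the embedding space, which is the intuition behind comparing WE-based metrics against PMI/LSA-based ones.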
Year
2016
DOI
10.1145/2911451.2914729
Venue
SIGIR
Field
Data mining, Word representation, Information retrieval, Computer science, Coherence (physics), Natural language processing, Artificial intelligence, Word embedding, Topic model, Latent semantic analysis, Pointwise mutual information
DocType
Conference
Citations
10
PageRank
0.54
References
12
Authors
4
Name             Order  Citations  PageRank
Anjie Fang       1      35         5.93
Craig Macdonald  2      2588       178.50
Iadh Ounis       3      3438       234.59
Philip Habel     4      34         2.88