Title
Representing Documents via Latent Keyphrase Inference.
Abstract
Many text mining approaches adopt bag-of-words or $n$-gram models to represent documents. Looking beyond just the words, i.e., the explicit surface forms, in a document can improve a computer's understanding of text. Aware of this, researchers have proposed concept-based models that rely on a human-curated knowledge base to incorporate other related concepts into the document representation. But these methods are not desirable when applied to vertical domains (e.g., literature, enterprise) due to low coverage of in-domain concepts in the general knowledge base and interference from out-of-domain concepts. In this paper, we propose a data-driven model named Latent Keyphrase Inference (LAKI) that represents documents with a vector of closely related domain keyphrases instead of single words or existing concepts in the knowledge base. We show that given a corpus of in-domain documents, topical content units can be learned for each domain keyphrase, which enables a computer to do smart inference to discover latent document keyphrases, going beyond just explicit mentions. Compared with state-of-the-art document representation approaches, LAKI fills the gap between bag-of-words and concept-based models by using domain keyphrases as the basic representation unit. It removes the dependency on a knowledge base while providing, with keyphrases, readily interpretable representations. When evaluated against 8 other methods on two text mining tasks over two corpora, LAKI outperformed them all.
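As a rough illustration of the contrast the abstract draws, the sketch below compares a bag-of-words vector with a representation whose units are domain keyphrases. It is a toy example with a hypothetical keyphrase list, and it only counts explicit mentions; LAKI's actual contribution is inferring *latent* keyphrases beyond such mentions, which this sketch does not attempt.

```python
from collections import Counter

# Hypothetical domain keyphrase list for illustration (not from the paper).
DOMAIN_KEYPHRASES = ["support vector machine", "feature selection", "text mining"]

def bag_of_words(doc: str) -> Counter:
    """Classic bag-of-words: count each surface-form token independently."""
    return Counter(doc.lower().split())

def keyphrase_vector(doc: str) -> Counter:
    """Keyphrase-unit representation: score a document by the domain
    keyphrases it mentions, treating each multi-word phrase as one unit."""
    text = doc.lower()
    return Counter({kp: text.count(kp) for kp in DOMAIN_KEYPHRASES if kp in text})

doc = "Feature selection improves text mining pipelines; text mining needs it."
print(bag_of_words(doc))    # token-level counts split "text" and "mining" apart
print(keyphrase_vector(doc))  # keeps "text mining" together as one unit
```

The keyphrase vector is lower-dimensional and directly interpretable (each dimension is a phrase a domain expert would recognize), which is the gap between bag-of-words and concept-based models that the abstract describes.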
Year: 2016
DOI: 10.1145/2872427.2883088
Venue: WWW
Field: Data mining, Text mining, Computer science, Inference, Document representation, General knowledge, Artificial intelligence, Natural language processing, Knowledge base
DocType: Conference
Volume: 2016
Citations: 6
PageRank: 0.44
References: 21
Authors: 6

Order  Name            Citations/PageRank
1      Jialu Liu       49760.12
2      Xiang Ren       88560.08
3      Jingbo Shang    1537.41
4      Taylor Cassidy  18712.48
5      Clare R. Voss   34429.51
6      Jiawei Han      430853824.48