Title
Anchored Correlation Explanation: Topic Modeling with Minimal Domain Knowledge
Abstract
While generative models such as Latent Dirichlet Allocation (LDA) have proven fruitful in topic modeling, they often require detailed assumptions and careful specification of hyperparameters. Such model complexity issues only compound when trying to generalize generative models to incorporate human input. We introduce Correlation Explanation (CorEx), an alternative approach to topic modeling that does not assume an underlying generative model, and instead learns maximally informative topics through an information-theoretic framework. This framework naturally generalizes to hierarchical and semi-supervised extensions with no additional modeling assumptions. In particular, word-level domain knowledge can be flexibly incorporated within CorEx through anchor words, allowing topic separability and representation to be promoted with minimal human intervention. Across a variety of datasets, metrics, and experiments, we demonstrate that CorEx produces topics that are comparable in quality to those produced by unsupervised and semi-supervised variants of LDA.
Year
2016
DOI
10.1162/tacl_a_00078
Venue
Transactions of the Association for Computational Linguistics (TACL), Vol. 5, 2017
Field
Latent Dirichlet allocation, Domain knowledge, Hyperparameter, Computer science, CorEx, Correlation, Artificial intelligence, Generative grammar, Topic model, Machine learning, Generative model
DocType
Journal
Volume
5
Issue
1
Citations
3
PageRank
0.51
References
0
Authors
4
Name                Order  Citations  PageRank
Ryan J. Gallagher   1      3          0.51
Kyle Reing          2      5          1.91
David Kale          3      220        13.58
Greg Ver Steeg      4      2433       2.99