Title
Inference and evaluation of the multinomial mixture model for text clustering
Abstract
In this article, we investigate the use of a probabilistic model for unsupervised clustering of text collections. Unsupervised clustering has become a basic module in many intelligent text processing applications, such as information retrieval, text classification, and information extraction. Probabilistic clustering models, which build "soft" theme-document associations, have recently been proposed. These models make it possible to compute, for each document, a probability vector whose values can be interpreted as the strength of the association between that document and each cluster. As such, these vectors can also serve to project texts into a lower-dimensional "semantic" space. These models, however, pose non-trivial estimation problems, which are aggravated by the very high dimensionality of the parameter space. The model considered in this paper is a mixture of multinomial distributions over word counts, with each component corresponding to a different theme. We propose a systematic evaluation framework to contrast various estimation procedures for this model. Starting with the expectation-maximization (EM) algorithm as the basic inference tool, we discuss the importance of initialization and the influence of other features, such as the smoothing strategy and the size of the vocabulary, thereby illustrating the difficulties incurred by the high dimensionality of the parameter space. We show empirically that, in the case of text processing, these difficulties can be alleviated by introducing the vocabulary incrementally, owing to the specific profile of word count distributions. Finally, exploiting the fact that the model parameters can be analytically integrated out, we show that Gibbs sampling on the theme configurations is tractable and compares favorably with the basic EM approach.
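The abstract's starting point, EM for a mixture of multinomial distributions over word counts, can be sketched compactly. The following is a minimal illustration, not the authors' implementation: the smoothing constant, initialization scheme, and iteration count are illustrative assumptions (the paper studies precisely how such choices matter).

```python
import numpy as np

def em_multinomial_mixture(X, K, n_iter=50, alpha=0.01, seed=0):
    """Fit a K-component multinomial mixture to a document-term
    count matrix X (n_docs x vocab_size) with EM.

    alpha is a small additive smoothing constant on the word
    probabilities; its value here is an illustrative choice.
    Returns mixture weights pi (K,), word probabilities beta (K, V),
    and per-document responsibilities R (n, K).
    """
    rng = np.random.default_rng(seed)
    n, V = X.shape
    # Random soft initialization of the responsibilities.
    R = rng.dirichlet(np.ones(K), size=n)            # (n, K)
    for _ in range(n_iter):
        # M-step: mixture weights and smoothed per-theme word probabilities.
        pi = R.sum(axis=0) / n                        # (K,)
        beta = R.T @ X + alpha                        # (K, V)
        beta /= beta.sum(axis=1, keepdims=True)
        # E-step: log-responsibilities up to the multinomial coefficient,
        # which is constant per document and cancels in the normalization.
        logp = X @ np.log(beta).T + np.log(pi)        # (n, K)
        logp -= logp.max(axis=1, keepdims=True)       # numerical stability
        R = np.exp(logp)
        R /= R.sum(axis=1, keepdims=True)
    return pi, beta, R
```

The rows of `R` are the "soft" theme-document association vectors the abstract describes, and can be used directly as a low-dimensional representation of each document.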
Year: 2006
DOI: 10.1016/j.ipm.2006.11.001
Venue: Inf. Process. Manage. (Information Processing and Management)
Keywords: em algorithm, parameter space, information retrieval, information extraction, probabilistic model, content analysis, text clustering, naive bayes, gibbs sampling, supervised learning, heuristic algorithm, expectation maximization, latent variable, multinomial distribution, mixture model
DocType: Journal
Volume: 43
Issue: 5
ISSN:
Citations: 19
PageRank: 1.00
References: 18
Authors: 3
Name             Order  Citations  PageRank
Loïs Rigouste    1      26         2.00
O. Cappe         2      2112       207.95
François Yvon    3      941        102.51