Title
Momentum Online LDA for Large-scale Datasets.
Abstract
Modeling large-scale document collections is an important direction in machine learning research. Online LDA uses stochastic gradient optimization to speed up convergence; however, the high noise of the stochastic gradients slows convergence and degrades performance. In this paper, we employ a momentum term to smooth out the noise of the stochastic gradients and propose an extension of Online LDA, namely Momentum Online LDA (MOLDA). We collect a large-scale corpus of 2M documents to evaluate our model. Experimental results indicate that MOLDA achieves faster convergence and better performance than the state-of-the-art.
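The abstract describes smoothing noisy stochastic (natural) gradients in Online LDA with a momentum term. Below is a minimal sketch of what such an update could look like, assuming the standard Online LDA natural-gradient update for the topic-word variational parameter and an illustrative momentum buffer; the names molda_update, velocity, and mu are assumptions for illustration, not the paper's notation, and the paper's exact MOLDA update may differ.

```python
# Hedged sketch: a momentum-smoothed natural-gradient update for online LDA.
# The function name, argument names, and exact update rule are illustrative
# assumptions; they are not taken from the MOLDA paper itself.
import numpy as np

def molda_update(lam, velocity, suff_stats, eta, D, batch_size, rho_t, mu=0.9):
    """One update of the topic-word variational parameter lambda.

    lam        : (K, V) current variational parameter
    velocity   : (K, V) momentum buffer accumulating past gradients
    suff_stats : (K, V) expected word counts from the current mini-batch
    eta        : Dirichlet prior on topic-word distributions
    D          : total number of documents in the corpus
    batch_size : number of documents in the mini-batch
    rho_t      : decaying step size, e.g. (tau0 + t) ** (-kappa)
    mu         : momentum coefficient (assumed hyperparameter)
    """
    # Noisy per-batch estimate of the optimal lambda (standard online LDA)
    lam_hat = eta + (D / batch_size) * suff_stats
    # Stochastic natural-gradient direction for this mini-batch
    grad = lam_hat - lam
    # Momentum term: exponentially averages past noisy gradients
    velocity = mu * velocity + rho_t * grad
    # Keep lambda strictly positive (Dirichlet parameters must be > 0)
    return np.maximum(lam + velocity, 1e-6), velocity

# Toy usage with random sufficient statistics (K=2 topics, V=5 words)
lam = np.ones((2, 5))
velocity = np.zeros((2, 5))
for t in range(10):
    suff_stats = np.random.rand(2, 5)
    rho_t = (1.0 + t) ** -0.7
    lam, velocity = molda_update(lam, velocity, suff_stats, eta=0.01,
                                 D=2_000_000, batch_size=256, rho_t=rho_t)
```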
Year
2014
DOI
10.3233/978-1-61499-419-0-1075
Venue
FRONTIERS IN ARTIFICIAL INTELLIGENCE AND APPLICATIONS
Field
Convergence (routing), Data mining, Computer science, Artificial intelligence, Momentum, Machine learning
DocType
Conference
Volume
263
ISSN
0922-6389
Citations
1
PageRank
0.36
References
6
Authors
3
Name, Order, Citations, PageRank
Jihong OuYang, 1, 94, 15.66
You Lu, 2, 19, 5.52
Ximing Li, 3, 44, 13.97