Abstract
---
In information retrieval, the dimensionality of document vectors plays an important role. To overcome the "curse of dimensionality", which makes indexing of high-dimensional data problematic, we may need to find a few words or concepts that characterize a document by its contents. To this end, we earlier proposed WordNet-based and WordNet+LSI (Latent Semantic Indexing) based models for dimension reduction. While the top-level LSI concepts contain identifiable terms, we show in this paper that semi-discrete decomposition usually provides a shorter list of terms, and only ternary weights need to be handled. With a term list of this size, identifying a document's topic becomes much more feasible.
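The ternary weights mentioned in the abstract come from the structure of semi-discrete decomposition (SDD), which approximates a term-document matrix A by X·diag(d)·Yᵀ with all entries of X and Y restricted to {-1, 0, 1}. The sketch below is a minimal illustration of the standard greedy alternating scheme (Kolda and O'Leary), not the paper's own implementation; matrix sizes and iteration counts are arbitrary choices for demonstration.

```python
import numpy as np

def best_ternary(s):
    # Given s (e.g. R @ y), find the ternary vector x in {-1,0,1}^n that
    # maximizes (x.T s)^2 / ||x||^2: keep the top-J entries of |s|, choosing
    # J to maximize (sum of top-J |s|)^2 / J, and set their signs.
    order = np.argsort(-np.abs(s))
    csum = np.cumsum(np.abs(s)[order])
    J = int(np.argmax(csum ** 2 / np.arange(1, len(s) + 1))) + 1
    x = np.zeros_like(s)
    x[order[:J]] = np.sign(s[order[:J]])
    return x

def sdd(A, k, inner_iters=10):
    # Greedy rank-k semi-discrete decomposition A ~ X @ diag(d) @ Y.T,
    # with X and Y ternary. Each outer step fits one rank-1 term to the
    # current residual R by alternating between x and y.
    m, n = A.shape
    X, Y, d = np.zeros((m, k)), np.zeros((n, k)), np.zeros(k)
    R = A.astype(float).copy()
    for i in range(k):
        y = np.zeros(n)
        y[np.argmax(np.abs(R).sum(axis=0))] = 1.0  # seed: heaviest column
        for _ in range(inner_iters):
            x = best_ternary(R @ y)
            y = best_ternary(R.T @ x)
        di = (x @ R @ y) / ((x @ x) * (y @ y))  # least-squares coefficient
        X[:, i], Y[:, i], d[i] = x, y, di
        R -= di * np.outer(x, y)
    return X, d, Y
```

Because each "concept" column of X has only ±1/0 entries, the terms contributing to it can be read off directly, which is the property the abstract exploits for topic identification.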
Year | DOI | Venue |
---|---|---|
2008 | 10.1109/ISDA.2008.62 | ISDA (1) |
Keywords | Field | DocType
---|---|---
lsi concept,high-dimensional data,latent semantic indexing,term list,dimension reduction,important role,information retrieval,smaller list,semi-discrete decomposition,identifiable term,document vector,topic identification,covariance matrix,vector space model,sparse matrices,curse of dimensionality,matrix decomposition,pattern recognition,wordnet,indexing,ontologies,vectors | Ontology (information science),Dimensionality reduction,Pattern recognition,Information retrieval,Document clustering,Computer science,Matrix decomposition,Search engine indexing,Curse of dimensionality,Artificial intelligence,Vector space model,WordNet | Conference
Citations | PageRank | References
---|---|---
1 | 0.35 | 14
Authors
---
3
Name | Order | Citations | PageRank |
---|---|---|---|
Václav Snášel | 1 | 37 | 10.63
Pavel Moravec | 2 | 245 | 23.32 |
Jaroslav Pokorný | 3 | 760 | 128.47 |