Title
Generative Training for 3D-Retrieval.
Abstract
A digital library for non-textual, multimedia documents can be defined by its functionality: markup, indexing, and retrieval. For textual documents, the techniques and algorithms to perform these tasks are well studied. For non-textual documents, these tasks are open research questions: How does one mark up a position on a digitized statue? What is the index of a building? How does one search and query for a CAD model? If no additional textual information is available, current approaches cluster, sort, and classify non-textual documents using machine learning techniques, which suffer from a cold-start problem: they either need a manually labeled, sufficiently large training set, or the (automatic) clustering / classification result may not respect semantic similarity. We solve this problem using procedural modeling techniques, which can generate arbitrary training sets without the need for any “real” data. The retrieval process itself can be performed with any method. In this article we describe the histogram of inverted distances in detail and compare it to the salient local visual features method. Both techniques are evaluated on the Princeton Shape Benchmark (Shilane et al., 2004). Furthermore, we improve the retrieval results by means of diffusion processes.
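The abstract's core idea, replacing a manually labeled training set with procedurally generated, automatically labeled shapes, can be illustrated with a minimal sketch. The following toy example is not the paper's implementation: the shape generators, the D2-style pairwise-distance histogram descriptor, and all parameter values are illustrative assumptions chosen only to show how synthetic data can drive retrieval without any hand-labeled examples.

```python
import math
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def sample_sphere(n, radius):
    """Sample n points uniformly on a sphere surface (hypothetical generator)."""
    pts = []
    while len(pts) < n:
        v = [random.gauss(0, 1) for _ in range(3)]
        norm = math.sqrt(sum(c * c for c in v))
        if norm > 1e-9:
            pts.append([c / norm * radius for c in v])
    return pts

def sample_box(n, side):
    """Sample n points uniformly inside an axis-aligned cube (hypothetical generator)."""
    half = side / 2
    return [[random.uniform(-half, half) for _ in range(3)] for _ in range(n)]

def d2_histogram(points, bins=16, pairs=4000, d_max=4.0):
    """D2-style shape distribution: normalized histogram of random pairwise distances."""
    hist = [0] * bins
    for _ in range(pairs):
        d = math.dist(random.choice(points), random.choice(points))
        hist[min(int(d / d_max * bins), bins - 1)] += 1
    return [h / pairs for h in hist]

def retrieve(query_hist, database):
    """Return the label of the nearest database descriptor under L1 distance."""
    return min(database,
               key=lambda e: sum(abs(a - b) for a, b in zip(e[1], query_hist)))[0]

# Procedurally generated training set: every entry is labeled for free,
# because the generator knows which class it produced (no cold start).
database = []
for _ in range(10):
    database.append(("sphere", d2_histogram(sample_sphere(300, random.uniform(0.8, 1.2)))))
    database.append(("box", d2_histogram(sample_box(300, random.uniform(1.6, 2.4)))))

query = d2_histogram(sample_sphere(300, 1.0))  # an unseen "real" shape
label = retrieve(query, database)
print(label)
```

The design point is that the descriptor and matcher are interchangeable (the paper evaluates a histogram of inverted distances and salient local visual features); only the procedurally generated, self-labeled training set is essential to avoiding the cold-start problem.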
Year: 2015
DOI: 10.5220/0005248300970105
Venue: GRAPP
Field: Procedural modeling, Computer graphics (images), Computer science, Search engine indexing, Artificial intelligence, Cluster analysis, Markup language, Semantic similarity, Computer vision, Cold start, Information retrieval, sort, Visual Word
DocType: Conference
Citations: 0
PageRank: 0.34
References: 0
Authors: 3
Name               Order  Citations  PageRank
Harald Grabner     1      22         1.57
Torsten Ullrich    2      471        1.63
Dieter W. Fellner  3      7801       2.84