Title
Shape2Vec: semantic-based descriptors for 3D shapes, sketches and images.
Abstract
Convolutional neural networks have been used successfully to compute shape descriptors and to jointly embed shapes and sketches in a common vector space. We propose a novel approach that leverages both labeled 3D shapes and the semantic information contained in their labels to generate semantically meaningful shape descriptors. A neural network is trained to generate shape descriptors that lie close to a vector representation of the shape's class in a word vector space. The method extends easily to range scans, hand-drawn sketches and images, making cross-modal retrieval possible without the need to design a different method for each query type. We show that sketch-based shape retrieval using semantic-based descriptors outperforms the state of the art by large margins, and that mesh-based retrieval generates results more relevant to the query than current deep shape descriptors.
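The core idea described above, training a network so that a shape's descriptor lands near the word vector of its class label, can be sketched as a least-squares objective. The following is a toy illustration under assumed forms, not the paper's implementation: a single linear layer stands in for the CNN, and random vectors stand in for the word-embedding space.

```python
import numpy as np

# Toy illustration (not the paper's architecture): learn a mapping f that
# sends a shape's feature vector x to a descriptor close to the word
# vector of its class. f is a single linear layer here, and the "word
# vectors" are random placeholders for real word embeddings.
rng = np.random.default_rng(0)

n_classes, feat_dim, embed_dim = 3, 16, 8
word_vecs = rng.normal(size=(n_classes, embed_dim))    # stand-in class embeddings

W = rng.normal(scale=0.1, size=(feat_dim, embed_dim))  # parameters of f(x) = x @ W
x = rng.normal(size=feat_dim)                          # one shape's input features
label = 1                                              # its class index

lr = 0.01
losses = []
for _ in range(200):
    desc = x @ W                       # the shape descriptor
    diff = desc - word_vecs[label]
    losses.append(float(diff @ diff))  # squared distance to the class word vector
    W -= lr * np.outer(x, diff)        # gradient step (factor of 2 absorbed into lr)

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.2e}")
```

After training, the descriptor of the example shape sits near its class's word vector, which is what enables semantic retrieval in the paper: queries of any modality are mapped into the same word vector space and matched by distance.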
Year
2016
DOI
10.1145/2980179.2980253
Venue
ACM Trans. Graph.
Keywords
shape descriptor, word vector space, semantic-based, depthmap, 2D sketch, deep learning, CNN
Field
Computer vision, Vector space, Pattern recognition, Computer science, Convolutional neural network, 3D shapes, Semantic information, Artificial intelligence, Deep learning, Artificial neural network
DocType
Journal
Volume
35
Issue
6
ISSN
0730-0301
Citations
12
PageRank
0.56
References
21
Authors
2
Name                Order  Citations  PageRank
Flora Ponjou Tasse  1      53         4.41
Neil A. Dodgson     2      7235       4.20