Title
Zero-Shot Learning Via Visual Abstraction
Abstract
One of the main challenges in learning fine-grained visual categories is gathering training images. Recent work in Zero-Shot Learning (ZSL) circumvents this challenge by describing categories via attributes or text. However, not all visual concepts, e.g., two people dancing, are easily amenable to such descriptions. In this paper, we propose a new modality for ZSL using visual abstraction to learn difficult-to-describe concepts. Specifically, we explore concepts related to people and their interactions with others. Our proposed modality allows one to provide training data by manipulating abstract visualizations, e.g., one can illustrate interactions between two clipart people by manipulating each person's pose, expression, gaze, and gender. The feasibility of our approach is shown on a human pose dataset and a new dataset containing complex interactions between two people, where we outperform several baselines. To better match across the two domains, we learn an explicit mapping between the abstract and real worlds.
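The abstract's final step, learning an explicit mapping between the abstract (clipart) world and the real world, can be illustrated with a minimal sketch. This is not the paper's actual method or features; it assumes hypothetical paired feature vectors for the two domains and fits a simple linear least-squares map, then uses it to project an unseen category's abstract exemplar into the real-image feature space for zero-shot matching.

```python
import numpy as np

# Illustrative sketch (not the paper's exact method): learn a linear
# least-squares mapping W from abstract-world features to real-image
# features using paired training examples, then map an unseen category's
# abstract exemplar into the real-image feature space.

rng = np.random.default_rng(0)

d_abs, d_real, n = 8, 6, 50              # hypothetical feature dimensions
A = rng.normal(size=(n, d_abs))          # abstract-world training features
W_true = rng.normal(size=(d_abs, d_real))
R = A @ W_true + 0.01 * rng.normal(size=(n, d_real))  # paired real features

# Closed-form least-squares fit: W = argmin_W ||A W - R||^2
W, *_ = np.linalg.lstsq(A, R, rcond=None)

def map_to_real(abstract_feat):
    """Project an abstract-world feature vector into real-image space."""
    return abstract_feat @ W

# Zero-shot use: a new category described only in the abstract world
# can now be compared against real images in the real feature space.
query = rng.normal(size=d_abs)
mapped = map_to_real(query)
```

At test time, a real image would then be assigned to the unseen category whose mapped exemplar is nearest in feature space; the paper's actual features (pose, expression, gaze, gender) and mapping are richer than this linear stand-in.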
Year
2014
DOI
10.1007/978-3-319-10593-2_27
Venue
Computer Vision - ECCV 2014, Part IV
Keywords
zero-shot learning, visual abstraction, synthetic data, pose
Field
Training set, Abstraction, Gaze, Zero shot learning, Computer science, Synthetic data, Artificial intelligence, Machine learning
DocType
Conference
Volume
8692
ISSN
0302-9743
Citations
27
PageRank
1.13
References
29
Authors
3
Name                 Order  Citations  PageRank
Stanislaw Antol      1      356        10.61
C. Lawrence Zitnick  2      7321       332.72
Devi Parikh          3      2929       132.01