Title
A generic model to compose vision modules for holistic scene understanding
Abstract
The problem of holistic scene understanding involves many vision tasks, such as depth estimation, scene categorization, and event categorization. Each of these tasks explores some aspect of the scene, but they are related in that they all represent attributes of the same scene. The intuition is that one task can provide meaningful attributes to aid the learning process of another task. In this work, we propose a generic model (together with learning and inference techniques) for connecting different vision tasks in the form of a two-layer cascade. Our model treats the first layer as a hidden layer, whose latent variables are inferred with feedback from the second layer. The feedback mechanism allows the first-layer classifiers to focus on the more important image modes and draws their outputs towards "attributes" rather than the original "labels". Our model also automatically discovers sparse connections between the learned attributes on the first layer and the target task on the second layer. Note that in our model the same vision task can act as an attribute learner as well as a target task, depending on which layer it is placed on. In extensive experiments, we show that the proposed model improves performance on all the tasks we consider: single-image depth estimation, scene categorization, saliency detection, and event categorization.
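To make the cascade structure concrete, here is a minimal sketch of the idea (not the authors' implementation): first-layer task modules produce attribute-like scores for the same image, and a second-layer target-task classifier combines them through sparse, L1-regularized connections. The synthetic data, scikit-learn estimators, and variable names below are illustrative assumptions; the paper's actual feedback-driven learning and inference over the hidden first layer is more involved.

```python
# Illustrative sketch only: scikit-learn stand-ins and synthetic data, not the
# authors' procedure (which infers the first layer as hidden variables with
# feedback from the second layer).
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 64))               # stand-in image features
y_scene = rng.integers(0, 4, size=n)       # first-layer task: scene category
y_depth = X @ rng.normal(size=64)          # first-layer task: coarse depth proxy
y_event = rng.integers(0, 3, size=n)       # second-layer target task: event category

# Layer 1: independent vision modules trained on the raw image features.
scene_clf = LogisticRegression(max_iter=1000).fit(X, y_scene)
depth_reg = Ridge().fit(X, y_depth)

# Their outputs serve as "attributes" describing the same scene.
attributes = np.column_stack([
    scene_clf.predict_proba(X),            # scene-category scores
    depth_reg.predict(X)[:, None],         # coarse depth estimate
])

# Layer 2: the target task learns sparse connections to the attributes
# (the L1 penalty stands in for the sparse connection discovery).
event_clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
event_clf.fit(attributes, y_event)
print("nonzero attribute connections per event class:",
      (np.abs(event_clf.coef_) > 1e-6).sum(axis=1))
```

In the paper, the same vision tasks can appear on either layer, and the first-layer outputs are refined by feedback from the second layer rather than trained once as above.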
Year
2010
DOI
10.1007/978-3-642-35749-7_6
Venue
ECCV Workshops (1)
Keywords
generic model, event categorization, holistic scene understanding, hidden layer, vision module, target task, scene categorization, layer classifier, different layer, vision task
Field
Categorization, Computer vision, Inference, Salience (neuroscience), Computer science, Intuition, Latent variable, Artificial intelligence, Machine learning
DocType
Conference
Citations
4
PageRank
0.49
References
20
Authors
4
Name | Order | Citations | PageRank
Congcong Li | 1 | 240 | 16.48
Adarsh Kowdle | 2 | 584 | 24.77
Ashutosh Saxena | 3 | 4575 | 227.88
Tsuhan Chen | 4 | 4763 | 346.32