Title
Representing scenes for real-time context classification on mobile devices.
Abstract
In this paper we introduce the DCT-GIST image representation model, which summarizes the context of a scene. The proposed image descriptor addresses the problem of real-time scene context classification on devices with limited memory and low computational resources (e.g., mobile and other single-sensor devices such as wearable cameras). Images are represented holistically, starting from statistics collected in the Discrete Cosine Transform (DCT) domain. Since the DCT coefficients are usually computed within the digital signal processor for JPEG conversion/storage, the proposed solution yields an instant, "free of charge" image signature. The novel image representation exploits the DCT coefficients of natural images by modelling them as Laplacian distributions, which are summarized by their scale parameter in order to capture the context of the scene. Only the discriminative DCT frequencies corresponding to edges and textures are retained to build the image descriptor. A spatial hierarchy is used to collect the DCT statistics on image sub-regions and better encode the spatial envelope of the scene. The proposed image descriptor is coupled with a Support Vector Machine classifier for context recognition. Experiments on the well-known 8 Scene Context Dataset as well as on the MIT-67 Indoor Scene dataset demonstrate that the proposed representation achieves better results than the popular GIST descriptor, and also outperforms it in terms of computational cost. Moreover, the experiments show that the proposed representation model closely matches other state-of-the-art methods based on bags of Textons collected on a spatial hierarchy.
Highlights
- A new image descriptor for scene context classification.
- The descriptor is suitable for the Image Generation Pipeline of single-sensor devices.
- The descriptor is computed directly in the compressed (JPEG) domain.
- The descriptor is computed in real time on platforms with low computational resources.
- The extraction process does not need extra information (e.g., a visual vocabulary).
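The pipeline the abstract describes (block DCT statistics, Laplacian scale parameters on selected AC frequencies, a spatial hierarchy of sub-regions, SVM classification) can be illustrated with a minimal Python sketch. The 8x8 block size, the AC frequency subset, the 1x1/2x2 grid layout, and all function names below are illustrative assumptions, not the exact choices made in the paper.

```python
import numpy as np
from scipy.fft import dctn


def block_dct(gray, block=8):
    """DCT-II coefficients of non-overlapping block x block tiles.

    Returns an array of shape (n_rows, n_cols, block, block).
    """
    h, w = gray.shape
    h, w = h - h % block, w - w % block
    tiles = (gray[:h, :w].astype(np.float64)
             .reshape(h // block, block, w // block, block)
             .swapaxes(1, 2))
    return dctn(tiles, axes=(-2, -1), norm="ortho")


# Hypothetical subset of AC frequencies standing in for the discriminative
# edge/texture frequencies selected in the paper.
AC_FREQS = [(0, 1), (1, 0), (1, 1), (0, 2), (2, 0), (1, 2), (2, 1), (2, 2)]


def laplacian_scales(coeffs):
    """Maximum-likelihood Laplacian scale b = mean(|x|) for each AC frequency."""
    return np.array([np.abs(coeffs[..., u, v]).mean() for u, v in AC_FREQS])


def dct_gist(gray, grids=(1, 2)):
    """Concatenate Laplacian scale parameters over a spatial hierarchy."""
    coeffs = block_dct(gray)
    rows, cols = coeffs.shape[:2]
    features = []
    for g in grids:                          # 1x1 (whole image), then a 2x2 grid
        for i in range(g):
            for j in range(g):
                region = coeffs[i * rows // g:(i + 1) * rows // g,
                                j * cols // g:(j + 1) * cols // g]
                features.append(laplacian_scales(region))
    return np.concatenate(features)


if __name__ == "__main__":
    image = np.random.rand(256, 256)         # stand-in for a grayscale image
    descriptor = dct_gist(image)
    print(descriptor.shape)                  # (1 + 4) regions * 8 frequencies = (40,)
```

The resulting descriptors would then be fed to a Support Vector Machine (e.g., scikit-learn's svm.SVC) for scene context classification, as the paper does.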
Year
2015
DOI
10.1016/j.patcog.2014.05.014
Venue
Pattern Recognition
Keywords
mobile devices, JPEG
Field
Computer vision, Pattern recognition, Wearable computer, Computer science, Digital signal processor, Discrete cosine transform, JPEG, Mobile device, Artificial intelligence, Hierarchy, Discriminative model, Scale parameter
DocType
Journal
Volume
48
Issue
4
ISSN
0031-3203
Citations
9
PageRank
0.63
References
17
Authors
5
Name, Order, Citations, PageRank
Giovanni Maria Farinella, 1, 412, 57.13
Daniele Ravì, 2, 232, 12.31
Valeria Tomaselli, 3, 29, 7.49
Mirko Guarnera, 4, 53, 6.59
Sebastiano Battiato, 5, 659, 78.73