Title
Disentangling Image Distortions In Deep Feature Space
Abstract
Previous literature suggests that perceptual similarity is an emergent property shared across deep visual representations. Experiments conducted on a dataset of human-judged image distortions have shown that deep features outperform classic perceptual metrics. In this work we take a further step toward a broader understanding of this property by analyzing the capability of deep visual representations to intrinsically characterize different types of image distortions. To this end, we first generate a number of synthetically distorted images and then analyze the features extracted by different layers of different Deep Neural Networks. We observe that a dimension-reduced representation of the features extracted from a given layer makes it possible to efficiently separate distortion types in the feature space. Moreover, each network layer exhibits a different ability to separate distortion types, and this ability varies with the network architecture. Finally, we evaluate the exploitation of features taken from the layer that best separates image distortions for: i) reduced-reference image quality assessment, and ii) characterization of distortion types and severity levels on both single- and multiple-distortion databases. Results achieved on both tasks suggest that deep visual representations can be employed in an unsupervised manner to efficiently characterize various image distortions. (c) 2021 Elsevier B.V. All rights reserved.
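The following is a minimal sketch, not the authors' code, of the kind of pipeline the abstract describes: extract activations from an intermediate layer of a pretrained network for synthetically distorted versions of an image, then project them to a low-dimensional space and inspect whether distortion types separate. The VGG-16 backbone, the chosen layer index, the two distortion generators, the severity levels, and the file path are all illustrative assumptions, not details taken from the paper.

import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.decomposition import PCA
from PIL import Image, ImageFilter

# Pretrained VGG-16 truncated at an intermediate conv block (hypothetical layer choice).
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:16].eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def deep_feature(img: Image.Image) -> torch.Tensor:
    """Global-average-pooled activations from the chosen layer."""
    with torch.no_grad():
        fmap = vgg(preprocess(img).unsqueeze(0))   # shape (1, C, H, W)
    return fmap.mean(dim=(2, 3)).squeeze(0)        # shape (C,)

def distort(img: Image.Image, kind: str, level: int) -> Image.Image:
    """Two illustrative synthetic distortions at increasing severity."""
    if kind == "blur":
        return img.filter(ImageFilter.GaussianBlur(radius=level))
    if kind == "noise":
        t = T.ToTensor()(img)
        t = (t + 0.05 * level * torch.randn_like(t)).clamp(0, 1)
        return T.ToPILImage()(t)
    raise ValueError(kind)

reference = Image.open("reference.jpg").convert("RGB")  # placeholder path
feats, labels = [], []
for kind in ("blur", "noise"):
    for level in range(1, 6):
        feats.append(deep_feature(distort(reference, kind, level)).numpy())
        labels.append(kind)

# 2-D PCA embedding of the deep features; under the paper's claim, points
# belonging to different distortion types should form separate clusters.
embedding = PCA(n_components=2).fit_transform(feats)

In this sketch, repeating the loop over several reference images and several layers would mimic the paper's layer-wise comparison, with separability in the embedding used as the selection criterion.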
Year
2021
DOI
10.1016/j.patrec.2021.05.008
Venue
PATTERN RECOGNITION LETTERS
Keywords
Image quality, Deep representations, Convolutional neural networks, Unsupervised learning
DocType
Journal
Volume
148
ISSN
0167-8655
Citations
0
PageRank
0.34
References
0
Authors
4
Name                  Order  Citations  PageRank
Simone Bianco         1      226        24.48
Luigi Celona          2      66         7.70
Paolo Napoletano      3      339        37.19
Raimondo Schettini    4      1476       154.06