Title
Attributable Visual Similarity Learning
Abstract
This paper proposes an attributable visual similarity learning (AVSL) framework for a more accurate and explainable similarity measure between images. Most existing similarity learning methods exacerbate the unexplainability by mapping each sample to a single point in the embedding space with a distance metric (e.g., Mahalanobis distance, Euclidean distance). Motivated by the human semantic similarity cognition, we propose a generalized similarity learning paradigm to represent the similarity between two images with a graph and then infer the overall similarity accordingly. Furthermore, we establish a bottom-up similarity construction and top-down similarity inference framework to infer the similarity based on semantic hierarchy consistency. We first identify unreliable higher-level similarity nodes and then correct them using the most coherent adjacent lower-level similarity nodes, which simultaneously preserve traces for similarity attribution. Extensive experiments on the CUB-200-2011, Cars196, and Stanford Online Products datasets demonstrate significant improvements over existing deep similarity learning methods and verify the interpretability of our framework. Code: https://github.com/zbr17/AVSL
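As a rough illustration of the bottom-up construction and top-down correction described in the abstract, the following is a minimal, hypothetical PyTorch sketch, not the authors' implementation (see the linked repository for that). The names node_similarity and topdown_correct, the two-level fake features, and the random reliability scores are assumptions made purely for illustration: per-channel similarity nodes are built bottom-up from a pair of feature maps, and unreliable higher-level nodes are blended with a summary of lower-level nodes.

# Hypothetical sketch of the bottom-up / top-down similarity idea; not the official AVSL code.
import torch
import torch.nn.functional as F


def node_similarity(feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
    """Bottom-up step: one similarity node per channel.

    feat_a, feat_b: (C, H, W) feature maps of the two images at one level.
    Returns a (C,) vector of per-channel cosine similarities.
    """
    a = F.normalize(feat_a.flatten(1), dim=1)  # (C, H*W), unit-norm per channel
    b = F.normalize(feat_b.flatten(1), dim=1)
    return (a * b).sum(dim=1)                  # (C,) cosine similarity per channel


def topdown_correct(high: torch.Tensor, low: torch.Tensor,
                    reliability: torch.Tensor) -> torch.Tensor:
    """Top-down step: blend unreliable high-level nodes with an aggregate of
    lower-level nodes, weighted by a (hypothetical) reliability score in [0, 1]."""
    low_summary = low.mean().expand_as(high)   # crude stand-in for "most coherent" low-level nodes
    return reliability * high + (1.0 - reliability) * low_summary


if __name__ == "__main__":
    torch.manual_seed(0)
    # Fake two-level CNN features for a pair of images.
    low_a, low_b = torch.randn(64, 28, 28), torch.randn(64, 28, 28)
    high_a, high_b = torch.randn(128, 7, 7), torch.randn(128, 7, 7)

    low_nodes = node_similarity(low_a, low_b)      # (64,)
    high_nodes = node_similarity(high_a, high_b)   # (128,)

    # Hypothetical reliability estimate; in practice this would come from
    # cross-level consistency rather than random numbers.
    reliability = torch.rand(128)
    corrected = topdown_correct(high_nodes, low_nodes, reliability)

    overall = corrected.mean()                     # scalar similarity between the two images
    print(f"overall similarity: {overall.item():.4f}")

The sketch only conveys the shape of the idea: similarity is carried by many per-channel nodes at multiple levels rather than a single embedding distance, which is what leaves traces for attribution.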
Year
2022
DOI
10.1109/CVPR52688.2022.00738
Venue
IEEE Conference on Computer Vision and Pattern Recognition
Keywords
Recognition: detection, categorization, retrieval; Explainable computer vision; Representation learning
DocType
Conference
Volume
2022
Issue
1
Citations
0
PageRank
0.34
References
0
Authors
4
Name            Order   Citations   PageRank
Borui Zhang     1       0           0.34
Wenzhao Zheng   2       15          2.91
Jie Zhou        3       2103        190.17
Jiwen Lu        4       3105        153.88