Borui Zhang, Wenzhao Zheng, Jie Zhou, Jiwen Lu

Abstract
This paper proposes an attributable visual similarity learning (AVSL) framework for a more accurate and explainable similarity measure between images. Most existing similarity learning methods exacerbate unexplainability by mapping each sample to a single point in the embedding space with a distance metric (e.g., Mahalanobis distance, Euclidean distance). Motivated by human semantic similarity cognition, we propose a generalized similarity learning paradigm that represents the similarity between two images with a graph and then infers the overall similarity accordingly. Furthermore, we establish a bottom-up similarity construction and top-down similarity inference framework to infer the similarity based on semantic hierarchy consistency. We first identify unreliable higher-level similarity nodes and then correct them using the most coherent adjacent lower-level similarity nodes, which simultaneously preserves traces for similarity attribution. Extensive experiments on the CUB-200-2011, Cars196, and Stanford Online Products datasets demonstrate significant improvements over existing deep similarity learning methods and verify the interpretability of our framework. Code is available at https://github.com/zbr17/AVSL.
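To make the top-down inference step concrete, below is a minimal, hypothetical PyTorch sketch of the correction idea described in the abstract: high-level similarity nodes flagged as unreliable are overridden by their most strongly linked low-level similarity, and the mask of overridden nodes doubles as an attribution trace. The reliability heuristic, the `links` matrix, and all names here are illustrative assumptions, not the paper's exact formulation (see the official repository for the real implementation).

```python
import torch

def top_down_correction(sims_low, sims_high, links, threshold=0.5):
    """Toy top-down similarity inference in the spirit of AVSL.

    sims_low:  (N,) low-level similarity nodes for an image pair
    sims_high: (M,) high-level similarity nodes
    links:     (M, N) nonnegative weights linking each high-level node
               to its adjacent low-level nodes (hypothetical structure)
    threshold: reliability cutoff (hypothetical criterion)
    """
    # Toy reliability heuristic: a high-level similarity near 0.5 is
    # treated as uncertain (illustrative only, not the paper's rule).
    reliability = (sims_high - 0.5).abs() * 2.0          # in [0, 1]
    unreliable = reliability < threshold                 # (M,) bool mask

    # Replace each unreliable high-level node with its most strongly
    # linked ("most coherent") adjacent low-level similarity.
    best_low = sims_low[links.argmax(dim=1)]             # (M,)
    corrected = torch.where(unreliable, best_low, sims_high)

    # Overall similarity: mean over corrected high-level nodes.
    # `unreliable` records which nodes were overridden, serving as
    # a trace for similarity attribution.
    return corrected.mean(), unreliable

# Usage with random stand-in similarities:
score, trace = top_down_correction(
    sims_low=torch.rand(16), sims_high=torch.rand(4), links=torch.rand(4, 16)
)
```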
Code Repositories
https://github.com/zbr17/AVSL
Benchmarks
| Benchmark | Methodology | Recall@1 (%) |
|---|---|---|
| Metric Learning on Cars196 | ResNet-50 + AVSL | 91.5 |
| Metric Learning on CUB-200-2011 | ResNet-50 + AVSL | 71.9 |
| Metric Learning on Stanford Online Products | ResNet-50 + AVSL | 79.6 |