Elias Ramzi, Nicolas Audebert, Nicolas Thome, Clément Rambour, Xavier Bitot

Abstract
Image retrieval is commonly evaluated with Average Precision (AP) or Recall@k. Yet, these metrics are limited to binary labels and do not take into account the severity of errors. This paper introduces a new hierarchical AP training method for pertinent image retrieval (HAPPIER). HAPPIER is based on a new H-AP metric, which leverages a concept hierarchy to refine AP by integrating the importance of errors and better evaluate rankings. To train deep models with H-AP, we carefully study the problem's structure and design a smooth lower bound surrogate combined with a clustering loss that ensures consistent ordering. Extensive experiments on 6 datasets show that HAPPIER significantly outperforms state-of-the-art methods for hierarchical retrieval, while being on par with the latest approaches when evaluating fine-grained ranking performances. Finally, we show that HAPPIER leads to a better organization of the embedding space and prevents the most severe failure cases of non-hierarchical methods. Our code is publicly available at: https://github.com/elias-ramzi/HAPPIER.
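To illustrate the idea behind H-AP, here is a minimal NumPy sketch of a hierarchy-weighted average precision. It is not the authors' exact formulation: the graded relevance values (e.g. 1.0 for the same fine-grained class, 0.5 for the same coarse class, 0.0 otherwise) and the normalization are illustrative assumptions; see the official repository for the real metric and training code.

```python
# Hedged sketch: a hierarchy-weighted AP in the spirit of H-AP.
# Relevance grading and normalization are assumptions for illustration only.
import numpy as np

def hierarchical_ap(scores, relevances):
    """scores: (N,) similarity of each gallery item to the query.
    relevances: (N,) graded relevance in [0, 1] derived from the label
    hierarchy, e.g. 1.0 same fine class, 0.5 same coarse class, 0.0 otherwise.
    """
    order = np.argsort(-scores)          # rank gallery items by decreasing score
    rel = relevances[order]
    ranks = np.arange(1, len(rel) + 1)
    h_rank = np.cumsum(rel)              # relevance accumulated up to each rank
    positives = rel > 0
    if not positives.any():
        return 0.0
    # Each positive contributes its relevance times a precision-like ratio
    # of accumulated relevance over its plain rank.
    return float(np.sum(rel[positives] * h_rank[positives] / ranks[positives])
                 / rel.sum())

# Toy usage: a query whose top results mix fine- and coarse-level matches.
scores = np.array([0.9, 0.8, 0.7, 0.6, 0.5])
relevances = np.array([1.0, 0.5, 0.0, 1.0, 0.0])
print(hierarchical_ap(scores, relevances))  # graded score in [0, 1]
```

With binary relevances (1 for same class, 0 otherwise), this reduces to the standard AP, which is why graded relevances can be seen as refining AP with the severity of errors.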
Code Repositories
https://github.com/elias-ramzi/HAPPIER
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| image-retrieval-on-inaturalist | HAPPIER_F (ResNet-50) | R@1: 71.0 |
| image-retrieval-on-inaturalist | HAPPIER (ResNet-50) | R@1: 70.7 |
| metric-learning-on-dyml-animal | HAPPIER | Average-mAP: 43.8 |
| metric-learning-on-dyml-product | HAPPIER | Average-mAP: 38.0 |
| metric-learning-on-dyml-vehicle | HAPPIER | Average-mAP: 37.0 |
| metric-learning-on-stanford-online-products-1 | HAPPIER_F | R@1: 81.8 |
| metric-learning-on-stanford-online-products-1 | HAPPIER | R@1: 81.0 |