A unifying mutual information view of metric learning: cross-entropy vs. pairwise losses

Malik Boudiaf, Jérôme Rony, Imtiaz Masud Ziko, Eric Granger, Marco Pedersoli, Pablo Piantanida, Ismail Ben Ayed

Abstract

Recently, substantial research efforts in Deep Metric Learning (DML) focused on designing complex pairwise-distance losses, which require convoluted schemes to ease optimization, such as sample mining or pair weighting. The standard cross-entropy loss for classification has been largely overlooked in DML. On the surface, the cross-entropy may seem unrelated and irrelevant to metric learning as it does not explicitly involve pairwise distances. However, we provide a theoretical analysis that links the cross-entropy to several well-known and recent pairwise losses. Our connections are drawn from two different perspectives: one based on an explicit optimization insight; the other on discriminative and generative views of the mutual information between the labels and the learned features. First, we explicitly demonstrate that the cross-entropy is an upper bound on a new pairwise loss, which has a structure similar to various pairwise losses: it minimizes intra-class distances while maximizing inter-class distances. As a result, minimizing the cross-entropy can be seen as an approximate bound-optimization (or Majorize-Minimize) algorithm for minimizing this pairwise loss. Second, we show that, more generally, minimizing the cross-entropy is actually equivalent to maximizing the mutual information, to which we connect several well-known pairwise losses. Furthermore, we show that various standard pairwise losses can be explicitly related to one another via bound relationships. Our findings indicate that the cross-entropy represents a proxy for maximizing the mutual information -- as pairwise losses do -- without the need for convoluted sample-mining heuristics. Our experiments over four standard DML benchmarks strongly support our findings. We obtain state-of-the-art results, outperforming recent and complex DML methods.
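
A compact way to state the mutual-information view summarized above uses the two standard decompositions of the mutual information between the learned features $Z$ and the labels $Y$ (a sketch of the intuition, not the paper's full derivation):

$$
\mathcal{I}(Z; Y) \;=\; \underbrace{\mathcal{H}(Y) - \mathcal{H}(Y \mid Z)}_{\text{discriminative view}} \;=\; \underbrace{\mathcal{H}(Z) - \mathcal{H}(Z \mid Y)}_{\text{generative view}}.
$$

Under this reading, the cross-entropy acts on the discriminative view by reducing $\mathcal{H}(Y \mid Z)$, while pairwise losses act on the generative view: shrinking $\mathcal{H}(Z \mid Y)$ corresponds to pulling same-class features together, and keeping $\mathcal{H}(Z)$ large corresponds to spreading features of different classes apart.

For concreteness, below is a minimal PyTorch-style sketch contrasting the two loss families. The contrastive-style pairwise loss here only illustrates the generic "minimize intra-class distances, maximize inter-class distances" structure discussed in the abstract; it is not the specific pairwise loss derived in the paper, and the names (`simple_pairwise_loss`, `margin`) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def classification_cross_entropy(logits, labels):
    # Standard classification cross-entropy on classifier outputs:
    # no pairwise distances appear explicitly.
    return F.cross_entropy(logits, labels)

def simple_pairwise_loss(embeddings, labels, margin=1.0):
    # Illustrative contrastive-style loss (not the paper's bound): minimize
    # intra-class distances, push inter-class distances beyond a margin.
    # Assumes the batch contains at least one positive and one negative pair.
    dists = torch.cdist(embeddings, embeddings, p=2)        # pairwise Euclidean distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)       # True for same-class pairs
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    intra = dists[same & ~eye].pow(2).mean()                # pull same-class pairs together
    inter = F.relu(margin - dists[~same]).pow(2).mean()     # push different-class pairs apart
    return intra + inter

# Example usage on random features: batch of 8 samples, 4 classes, 128-d embeddings.
z = F.normalize(torch.randn(8, 128), dim=1)
y = torch.randint(0, 4, (8,))
print(simple_pairwise_loss(z, y))
```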

Benchmarks

Benchmark | Methodology | Metric
metric-learning-on-cars196 | ResNet-50 + Cross-Entropy | R@1: 89.3
metric-learning-on-cub-200-2011 | ResNet-50 + Cross-Entropy | R@1: 69.2
metric-learning-on-in-shop-1 | ResNet-50 + Cross-Entropy | R@1: 90.6
metric-learning-on-stanford-online-products-1 | ResNet-50 + Cross-Entropy | R@1: 81.1
