A Closer Look at the Explainability of Contrastive Language-Image Pre-training

Yi Li, Hualiang Wang, Yiqun Duan, Jiheng Zhang, Xiaomeng Li
Abstract

Contrastive language-image pre-training (CLIP) is a powerful vision-language model that has shown great benefits for various tasks. However, we have identified some issues with its explainability, which undermine its credibility and limit its capacity on related tasks. Specifically, we find that CLIP tends to focus on background regions rather than foregrounds, with noisy activations at irrelevant positions in the visualization results. These phenomena conflict with conventional explainability methods based on the class activation map (CAM), where the raw model can highlight local foreground regions under global supervision without alignment. To address these problems, we take a closer look at CLIP's architecture and features. Our analyses show that the raw self-attention links to inconsistent semantic regions, producing visualizations opposite to the expected ones, and that the noisy activations stem from features that are redundant across categories. Building on these insights, we propose CLIP Surgery for reliable CAM, a method that applies surgery-like modifications to the inference architecture and features, without the further fine-tuning that classical CAM methods require. This approach significantly improves the explainability of CLIP, surpassing existing methods by large margins. It also enables multimodal visualization and extends the capacity of raw CLIP to open-vocabulary tasks without extra alignment. The code is available at https://github.com/xmed-lab/CLIP_Surgery.
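The two "surgeries" the abstract describes can be illustrated with a loose sketch: replacing query-key attention with value-value attention so tokens attend to semantically consistent regions, and subtracting the feature direction shared by all class embeddings to suppress noisy, category-redundant activations. This is an illustrative simplification in NumPy, not the authors' exact formulation (the official repository is in PyTorch, and the paper's feature surgery operates on element-wise products); function names and shapes here are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def vv_attention(v, scale):
    """Value-value self-attention sketch: attention weights come from the
    value projections themselves, so each token aggregates tokens that are
    semantically similar to it, instead of the inconsistent regions that
    raw query-key attention can link to."""
    attn = softmax((v @ v.T) * scale)   # (tokens, tokens) similarity
    return attn @ v                      # (tokens, dim) consistent features

def feature_surgery(image_tokens, text_feats):
    """Feature-redundancy sketch: remove the mean text embedding (the
    direction shared by every category) before computing similarity maps,
    so activations common to all classes no longer appear as noise."""
    redundant = text_feats.mean(axis=0, keepdims=True)   # (1, dim)
    return image_tokens @ (text_feats - redundant).T     # (tokens, classes)
```

Applied per image, `feature_surgery` yields a (tokens, classes) similarity map that can be reshaped into a per-class heatmap for CAM-style visualization.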

Code Repositories

xmed-lab/clip_surgery (official, PyTorch)
xmed-lab/clipn (PyTorch)

Benchmarks

Benchmark | Methodology | Metrics
open-vocabulary-semantic-segmentation-on | CLIP Surgery (CLIP without any fine-tuning) | mIoU: 31.4
open-vocabulary-semantic-segmentation-on-1 | CLIP Surgery (original CLIP without any fine-tuning) | mIoU: 29.3
open-vocabulary-semantic-segmentation-on-coco | CLIP Surgery (original CLIP without any fine-tuning) | mIoU: 21.9
zero-shot-segmentation-on-ade20k-training | CLIPSurgery | mIoU: 12.9

