Multi-label Cluster Discrimination for Visual Representation Learning

Xiang An; Kaicheng Yang; Xiangzi Dai; Ziyong Feng; Jiankang Deng

Abstract

Contrastive Language-Image Pre-training (CLIP) has recently demonstrated success across various tasks thanks to the superior feature representations learned through image-text contrastive learning. However, the instance discrimination objective used by CLIP can hardly encode the semantic structure of the training data. To address this limitation, cluster discrimination has been proposed, alternating between cluster assignment and classification. Nevertheless, most cluster discrimination approaches define only a single pseudo-label per image, neglecting the multi-label signals present in natural images. In this paper, we propose a novel Multi-Label Cluster Discrimination method, named MLCD, to enhance representation learning. In the clustering step, we first cluster the large-scale LAION-400M dataset into one million centers based on off-the-shelf embedding features. Since natural images frequently contain multiple visual objects or attributes, we select the multiple closest centers as auxiliary class labels. In the discrimination step, we design a novel multi-label classification loss that elegantly separates the losses from positive and negative classes and alleviates ambiguity on the decision boundary. We validate the proposed multi-label cluster discrimination method with experiments on models and pre-training datasets of different scales. Experimental results show that our method achieves state-of-the-art performance on multiple downstream tasks, including linear probing, zero-shot classification, and image-text retrieval. Code and models have been released at https://github.com/deepglint/unicom .
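The two steps described in the abstract can be sketched in code. The snippet below is a minimal illustration, not the paper's implementation: `assign_multilabels` picks the k closest cluster centers of an embedding as pseudo multi-labels, and `multilabel_loss` shows one plausible decoupled form in which positive and negative class scores contribute separate log-sum-exp terms (the exact loss, margins, and scaling used by MLCD are in the paper and repository).

```python
import numpy as np

def assign_multilabels(embedding, centers, k=3):
    """Return indices of the k closest cluster centers (by cosine similarity),
    used as pseudo multi-labels for the image embedding."""
    z = embedding / np.linalg.norm(embedding)
    c = centers / np.linalg.norm(centers, axis=1, keepdims=True)
    sims = c @ z                      # cosine similarity to every center
    return np.argsort(-sims)[:k]     # k nearest centers as auxiliary labels

def multilabel_loss(scores, pos_idx):
    """Decoupled multi-label loss sketch: positives and negatives are
    penalized by separate terms, so they do not compete in one softmax."""
    mask = np.zeros(scores.shape[0], dtype=bool)
    mask[pos_idx] = True
    pos, neg = scores[mask], scores[~mask]
    loss_pos = np.log1p(np.sum(np.exp(-pos)))  # push positive scores up
    loss_neg = np.log1p(np.sum(np.exp(neg)))   # push negative scores down
    return loss_pos + loss_neg
```

With this form, an embedding that scores high on its assigned cluster centers and low on all others incurs a near-zero loss, while the reverse assignment is penalized heavily.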

Code Repositories

deepglint/unicom (official, PyTorch)

Benchmarks

Benchmark | Methodology | Metric
referring-expression-segmentation-on-refcoco | MLCD-Seg-7B | Overall IoU: 83.6
referring-expression-segmentation-on-refcoco-3 | MLCD-Seg-7B | Overall IoU: 79.4
referring-expression-segmentation-on-refcoco-4 | MLCD-Seg-7B | Overall IoU: 82.9
referring-expression-segmentation-on-refcoco-5 | MLCD-Seg-7B | Overall IoU: 75.6
referring-expression-segmentation-on-refcoco-8 | MLCD-Seg-7B | Overall IoU: 85.3
referring-expression-segmentation-on-refcoco-9 | MLCD-Seg-7B | Overall IoU: 81.5
referring-expression-segmentation-on-refcocog | MLCD-Seg-7B | Overall IoU: 79.9
referring-expression-segmentation-on-refcocog-1 | MLCD-Seg-7B | Overall IoU: 80.5
visual-question-answering-on-docvqa-test | MLCD-Embodied-7B | ANLS: 0.916
