CoCa: Contrastive Captioners are Image-Text Foundation Models

Jiahui Yu; Zirui Wang; Vijay Vasudevan; Legg Yeung; Mojtaba Seyedhosseini; Yonghui Wu

Abstract

Exploring large-scale pretrained foundation models is of significant interest in computer vision because these models can be quickly transferred to many downstream tasks. This paper presents Contrastive Captioner (CoCa), a minimalist design to pretrain an image-text encoder-decoder foundation model jointly with contrastive loss and captioning loss, thereby subsuming model capabilities from contrastive approaches like CLIP and generative methods like SimVLM. In contrast to standard encoder-decoder transformers where all decoder layers attend to encoder outputs, CoCa omits cross-attention in the first half of decoder layers to encode unimodal text representations, and cascades the remaining decoder layers which cross-attend to the image encoder for multimodal image-text representations. We apply a contrastive loss between unimodal image and text embeddings, in addition to a captioning loss on the multimodal decoder outputs which predicts text tokens autoregressively. By sharing the same computational graph, the two training objectives are computed efficiently with minimal overhead. CoCa is pretrained end-to-end and from scratch on both web-scale alt-text data and annotated images by treating all labels simply as text, seamlessly unifying natural language supervision for representation learning. Empirically, CoCa achieves state-of-the-art performance with zero-shot transfer or minimal task-specific adaptation on a broad range of downstream tasks, spanning visual recognition (ImageNet, Kinetics-400/600/700, Moments-in-Time), crossmodal retrieval (MSCOCO, Flickr30K, MSR-VTT), multimodal understanding (VQA, SNLI-VE, NLVR2), and image captioning (MSCOCO, NoCaps). Notably on ImageNet classification, CoCa obtains 86.3% zero-shot top-1 accuracy, 90.6% with a frozen encoder and learned classification head, and new state-of-the-art 91.0% top-1 accuracy on ImageNet with a finetuned encoder.
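The two training objectives described in the abstract — a contrastive loss between unimodal image and text embeddings, plus an autoregressive captioning loss on the multimodal decoder outputs — can be sketched in plain NumPy. This is an illustrative sketch, not the paper's implementation: the function names, the temperature value, and the loss weights are assumptions (the paper combines the two terms with scalar hyperparameters), and a real CoCa model would produce the embeddings and token logits with an image encoder and the split unimodal/multimodal text decoder.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss between L2-normalized unimodal embeddings."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature      # (B, B) similarity matrix
    labels = np.arange(len(logits))         # matching pairs sit on the diagonal
    i2t = -np.log(softmax(logits, axis=1)[labels, labels])  # image-to-text
    t2i = -np.log(softmax(logits, axis=0)[labels, labels])  # text-to-image
    return 0.5 * (i2t.mean() + t2i.mean())

def captioning_loss(token_logits, target_ids):
    """Autoregressive cross-entropy over the multimodal decoder's token logits."""
    probs = softmax(token_logits, axis=-1)              # (B, T, V)
    b, t = target_ids.shape
    # Pick the probability assigned to each target token.
    picked = probs[np.arange(b)[:, None], np.arange(t)[None, :], target_ids]
    return -np.log(picked).mean()

def coca_loss(img_emb, txt_emb, token_logits, target_ids,
              lambda_con=1.0, lambda_cap=2.0):
    # lambda_con / lambda_cap are hypothetical weights for the two objectives.
    return (lambda_con * contrastive_loss(img_emb, txt_emb)
            + lambda_cap * captioning_loss(token_logits, target_ids))
```

Because both losses are computed from the same forward pass (the unimodal decoder layers feed the contrastive embedding, and the cross-attending layers feed the captioning logits), the combined objective adds minimal overhead, which is the efficiency point the abstract makes.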

Code Repositories

amitakamath/whatsup_vlms (PyTorch)
mlfoundations/open_clip (PyTorch)
lucidrains/CoCa-pytorch (PyTorch)
Chaolei98/FreeZAD (PyTorch)
amitakamath/hard_positives (PyTorch)
facebookresearch/multimodal (PyTorch)

Benchmarks

Benchmark | Methodology | Metrics
action-classification-on-kinetics-400 | CoCa (frozen) | Acc@1: 88.0
action-classification-on-kinetics-400 | CoCa (finetuned) | Acc@1: 88.9
action-classification-on-kinetics-600 | CoCa (finetuned) | Top-1 Accuracy: 89.4
action-classification-on-kinetics-600 | CoCa (frozen) | Top-1 Accuracy: 88.5
action-classification-on-kinetics-700 | CoCa (frozen) | Top-1 Accuracy: 81.1
action-classification-on-kinetics-700 | CoCa (finetuned) | Top-1 Accuracy: 82.7
action-classification-on-moments-in-time-2 | CoCa (finetuned) | Top-1 Accuracy: 49.0
action-classification-on-moments-in-time-2 | CoCa (frozen) | Top-1 Accuracy: 47.4
image-captioning-on-coco-captions | CoCa | BLEU-4: 40.9, CIDEr: 143.6, METEOR: 33.9, SPICE: 24.7
image-classification-on-imagenet | CoCa (finetuned) | Number of params: 2100M, Top-1 Accuracy: 91.0%
image-classification-on-objectnet | CoCa | Top-1 Accuracy: 82.7
video-retrieval-on-msr-vtt | CoCa (zero-shot) | text-to-video R@1: 30.0, R@5: 52.4, R@10: 61.6; video-to-text R@1: 49.9, R@5: 73.4, R@10: 81.4
visual-entailment-on-snli-ve-test | CoCa | Accuracy: 87.1
visual-entailment-on-snli-ve-val | CoCa | Accuracy: 87.0
visual-question-answering-on-vqa-v2-test-dev-1 | CoCa | Accuracy: 82.3
visual-reasoning-on-nlvr2-dev | CoCa | Accuracy: 86.1
visual-reasoning-on-nlvr2-test | CoCa | Accuracy: 87.0
zero-shot-cross-modal-retrieval-on-coco-2014 | CoCa | Image-to-text R@1: 66.3, R@5: 86.2, R@10: 91.8; Text-to-image R@1: 51.2, R@5: 74.2, R@10: 82.0
zero-shot-cross-modal-retrieval-on-flickr30k | CoCa | Image-to-text R@1: 92.5, R@5: 99.5, R@10: 99.9; Text-to-image R@1: 80.4, R@5: 95.7, R@10: 97.7
zero-shot-transfer-image-classification-on-1 | CoCa | Accuracy (Private): 86.3
zero-shot-transfer-image-classification-on-3 | CoCa | Accuracy (Private): 80.7
zero-shot-transfer-image-classification-on-4 | CoCa | Accuracy: 96.5
zero-shot-transfer-image-classification-on-5 | CoCa | Accuracy (Private): 90.2
zero-shot-transfer-image-classification-on-6 | CoCa | Accuracy (Private): 82.7
zero-shot-transfer-image-classification-on-8 | CoCa | Accuracy (Private): 77.6
