Reproducible scaling laws for contrastive language-image learning

Mehdi Cherti, Romain Beaumont, Ross Wightman, Mitchell Wortsman, Gabriel Ilharco, Cade Gordon, Christoph Schuhmann, Ludwig Schmidt, Jenia Jitsev

Abstract

Scaling up neural networks has led to remarkable performance across a wide range of tasks. Moreover, performance often follows reliable scaling laws as a function of training set size, model size, and compute, which offers valuable guidance as large-scale experiments are becoming increasingly expensive. However, previous work on scaling laws has primarily used private data and models or focused on uni-modal language or vision learning. To address these limitations, we investigate scaling laws for contrastive language-image pre-training (CLIP) with the public LAION dataset and the open-source OpenCLIP repository. Our large-scale experiments involve models trained on up to two billion image-text pairs and identify power law scaling for multiple downstream tasks including zero-shot classification, retrieval, linear probing, and end-to-end fine-tuning. We find that the training distribution plays a key role in scaling laws, as the OpenAI and OpenCLIP models exhibit different scaling behavior despite identical model architectures and similar training recipes. We open-source our evaluation workflow and all models, including the largest public CLIP models, to ensure reproducibility and make scaling laws research more accessible. Source code and instructions to reproduce this study will be available at https://github.com/LAION-AI/scaling-laws-openclip
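The power-law scaling the abstract describes can be illustrated with a small fit: a power law E = a · C^(-b) (error as a function of compute) is linear in log-log space, so the exponent is recoverable by least squares. The sketch below uses synthetic, hypothetical numbers, not the paper's actual measurements.

```python
import numpy as np

# Hypothetical illustration (not the paper's data): zero-shot error E
# modeled as a power law of total training compute C, E = a * C**(-b).
# In log-log space this is linear: log E = log a - b * log C,
# so the exponent b falls out of a least-squares line fit.
a_true, b_true = 5.0, 0.25
compute = np.array([1e3, 1e4, 1e5, 1e6, 1e7])  # arbitrary compute units
error = a_true * compute ** (-b_true)          # noiseless synthetic points

slope, intercept = np.polyfit(np.log(compute), np.log(error), 1)
b_fit, a_fit = -slope, np.exp(intercept)

print(f"fitted a = {a_fit:.3f}, exponent b = {b_fit:.3f}")
```

With real benchmark results the points are noisy, but the same log-log regression is the standard way such scaling exponents are estimated.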

Code Repositories

- laion-ai/scaling-laws-openclip (official; PyTorch)
- mlfoundations/open_clip (PyTorch)
- eify/open_clip (PyTorch)
- nahidalam/open_clip (PyTorch)

Benchmarks

| Benchmark | Method | Metrics |
|---|---|---|
| Image classification on ImageNet | OpenCLIP ViT-H/14 | Top-1 accuracy: 88.5% |
| Open-vocabulary attribute detection on OVAD | OpenCLIP ViT-B/32 | Mean average precision: 17.0 |
| Zero-shot cross-modal retrieval on Flickr30k | OpenCLIP ViT-H/14 | Image-to-text R@5: 99.3; text-to-image R@5: 94.1 (R@1 and R@10 not reported) |
| Zero-shot image classification on Country211 | OpenCLIP ViT-H/14 (34B samples seen, LAION-2B) | Top-1 accuracy: 30.01 |
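The zero-shot numbers above come from scoring each image embedding against text embeddings of class prompts (e.g. "a photo of a {class}") by cosine similarity. A minimal NumPy sketch of that scoring step, with random vectors standing in for real CLIP features:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for CLIP outputs: 4 image embeddings and 3 class-prompt
# text embeddings, each of dimension 512 (random here, not real features).
image_feats = rng.standard_normal((4, 512))
text_feats = rng.standard_normal((3, 512))

# Zero-shot classification: L2-normalize both sides, then the dot
# product is the cosine similarity between image and class prompt.
image_feats /= np.linalg.norm(image_feats, axis=1, keepdims=True)
text_feats /= np.linalg.norm(text_feats, axis=1, keepdims=True)

logits = image_feats @ text_feats.T      # shape (4, 3): image x class scores
predictions = logits.argmax(axis=1)      # predicted class index per image

print(logits.shape, predictions)
```

Retrieval metrics such as R@5 use the same similarity matrix, ranking candidates per query and checking whether the match appears in the top 5.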
