Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese

An Yang, Junshu Pan, Junyang Lin, Rui Men, Yichang Zhang, Jingren Zhou, Chang Zhou

Abstract

The tremendous success of CLIP (Radford et al., 2021) has promoted the research and application of contrastive learning for vision-language pretraining. In this work, we construct a large-scale dataset of Chinese image-text pairs, most of which are retrieved from publicly available datasets, and we pretrain Chinese CLIP models on this new dataset. We develop five Chinese CLIP models of multiple sizes, spanning from 77 to 958 million parameters. Furthermore, we propose a two-stage pretraining method, in which the model is first trained with the image encoder frozen and then trained with all parameters optimized, to achieve enhanced performance. Our comprehensive experiments demonstrate that Chinese CLIP achieves state-of-the-art performance on MUGE, Flickr30K-CN, and COCO-CN in both zero-shot and finetuned setups, and competitive performance in zero-shot image classification on the ELEVATER benchmark (Li et al., 2022). We have released our code, models, and demos at https://github.com/OFA-Sys/Chinese-CLIP.
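The training objective is the standard symmetric image-text contrastive (InfoNCE) loss from CLIP, and the two-stage recipe mainly controls which parameters receive gradients: in stage one the image encoder is frozen and only the text encoder is updated, and in stage two all parameters are optimized jointly. A minimal PyTorch sketch of this schedule is shown below; the encoder modules, data loader, and hyperparameters are illustrative placeholders rather than the authors' implementation.

```python
# Minimal sketch of the two-stage contrastive pretraining schedule described
# in the abstract. ImageEncoder, TextEncoder, and the data loader are
# hypothetical placeholders, not the authors' code.
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb, text_emb, logit_scale):
    # Symmetric InfoNCE loss over in-batch image-text pairs.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = logit_scale * image_emb @ text_emb.t()              # (B, B) similarities
    labels = torch.arange(logits.size(0), device=logits.device)  # matched pairs on the diagonal
    return (F.cross_entropy(logits, labels) +
            F.cross_entropy(logits.t(), labels)) / 2

def train_stage(image_encoder, text_encoder, loader, logit_scale, stage):
    # Stage 1: freeze the image encoder, tune only the text encoder.
    # Stage 2: unfreeze everything and optimize all parameters jointly.
    for p in image_encoder.parameters():
        p.requires_grad = (stage == 2)
    params = [p for m in (image_encoder, text_encoder)
              for p in m.parameters() if p.requires_grad]
    optimizer = torch.optim.AdamW(params, lr=1e-4)  # illustrative learning rate

    for images, texts in loader:
        loss = clip_contrastive_loss(image_encoder(images),
                                     text_encoder(texts), logit_scale)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

The same loop is run twice, once per stage; the only difference between the two calls is whether the image encoder's parameters require gradients.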

Code Repositories

ofa-sys/chinese-clip (official implementation, PyTorch)
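The released package can be used for zero-shot image-text matching along the lines of the repository's documented quick start. The sketch below follows that usage from memory, so the exact function names and model tags should be checked against the current README; the image path and candidate captions are placeholders.

```python
# Zero-shot image-text matching with the released Chinese-CLIP package.
# Sketch based on the repository's quick-start usage; verify function and
# model names against the current README. "example.jpg" is a placeholder.
import torch
from PIL import Image

import cn_clip.clip as clip
from cn_clip.clip import load_from_name, available_models

device = "cuda" if torch.cuda.is_available() else "cpu"
print("Available models:", available_models())

model, preprocess = load_from_name("ViT-B-16", device=device, download_root="./")
model.eval()

image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)
# Candidate captions in Chinese: "a cat", "a dog", "a car".
texts = clip.tokenize(["一只猫", "一只狗", "一辆汽车"]).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(texts)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    # 100.0 stands in for the learned temperature (exp of the logit scale).
    probs = (100.0 * image_features @ text_features.t()).softmax(dim=-1)

print("Caption probabilities:", probs.cpu().numpy())
```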

Benchmarks

Image Retrieval on COCO-CN
  CN-CLIP (RN50):            R@1 66.8 | R@5 91.1 | R@10 97.0
  CN-CLIP (ViT-B/16):        R@1 77.0 | R@5 97.1 | R@10 99.0
  CN-CLIP (ViT-L/14):        R@1 78.9 | R@5 96.3 | R@10 99.0
  CN-CLIP (ViT-L/14@336px):  R@1 80.1 | R@5 96.7 | R@10 99.2
  CN-CLIP (ViT-H/14):        R@1 81.5 | R@5 96.9 | R@10 99.1

Image Retrieval on Flickr30K-CN
  CN-CLIP (RN50):            R@1 66.7 | R@5 89.4 | R@10 94.1
  CN-CLIP (ViT-B/16):        R@1 79.1 | R@5 94.8 | R@10 97.4
  CN-CLIP (ViT-L/14):        R@1 82.7 | R@5 96.7 | R@10 98.6
  CN-CLIP (ViT-L/14@336px):  R@1 84.4 | R@5 97.1 | R@10 98.7
  CN-CLIP (ViT-H/14):        R@1 83.8 | R@5 96.9 | R@10 98.6

Image Retrieval on MUGE Retrieval
  CN-CLIP (RN50):            R@1 48.6 | R@5 75.1 | R@10 84.0 | Mean Recall 69.2
  CN-CLIP (ViT-B/16):        R@1 58.4 | R@5 83.6 | R@10 90.0 | Mean Recall 77.4
  CN-CLIP (ViT-L/14):        R@1 63.3 | R@5 85.6 | R@10 91.3 | Mean Recall 80.1
  CN-CLIP (ViT-L/14@336px):  R@1 65.3 | R@5 86.7 | R@10 92.1 | Mean Recall 81.3
  CN-CLIP (ViT-H/14):        R@1 68.9 | R@5 88.7 | R@10 93.1 | Mean Recall 83.6
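For reference, the retrieval metrics above are Recall@K: the percentage of text queries whose ground-truth image appears among the top K retrieved candidates, with MUGE additionally reporting the mean of R@1, R@5, and R@10. A small self-contained sketch of how these numbers are computed from a similarity matrix is given below; the tensor shapes and the assumption of a single relevant image per query are illustrative.

```python
# Recall@K for text-to-image retrieval, computed from a precomputed similarity
# matrix. Assumes one relevant image per text query; the random tensors below
# are placeholders for real model outputs.
import torch

def recall_at_k(similarity, ground_truth, ks=(1, 5, 10)):
    # similarity: (num_texts, num_images) image-text similarity scores
    # ground_truth: (num_texts,) index of the matching image for each query
    ranking = similarity.argsort(dim=-1, descending=True)  # images sorted per query
    hits = ranking == ground_truth.unsqueeze(-1)           # True at the rank of the match
    return {f"R@{k}": 100.0 * hits[:, :k].any(dim=-1).float().mean().item()
            for k in ks}

scores = recall_at_k(torch.randn(100, 500), torch.randint(0, 500, (100,)))
# Mean recall, as reported for MUGE, is the average of R@1, R@5, and R@10.
mean_recall = sum(scores.values()) / len(scores)
print(scores, mean_recall)
```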
