
Abstract
In this work, we present a conceptually simple and effective method to train a strong bilingual/multilingual multimodal representation model. Starting from the pre-trained multimodal representation model CLIP released by OpenAI, we replace its text encoder with the pre-trained multilingual text encoder XLM-R, and align the language and image representations through a two-stage training schema consisting of teacher learning and contrastive learning. We validate our method through evaluation on a wide range of tasks. We set new state-of-the-art performance on a number of tasks, including ImageNet-CN, Flickr30k-CN, COCO-CN, and XTD. Furthermore, our model performs very close to CLIP on almost all tasks, suggesting that one can simply replace the text encoder in CLIP for extended capabilities such as multilingual understanding. Our models and code are available at https://github.com/FlagAI-Open/FlagAI.
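To make the two-stage schema concrete, below is a minimal PyTorch sketch of both objectives: stage 1 distills CLIP's (frozen) text encoder into an XLM-R student equipped with a linear projection, using parallel text pairs; stage 2 fine-tunes with a symmetric CLIP-style contrastive loss on image-text pairs. The MSE distillation objective, the projection layer sizes, and the InfoNCE formulation are standard choices assumed here for illustration, not the exact training code from the repository.

```python
import torch
import torch.nn.functional as F
from torch import nn

class StudentTextEncoder(nn.Module):
    """XLM-R followed by a linear projection into CLIP's text embedding space."""
    def __init__(self, xlmr, xlmr_dim=1024, clip_dim=768):
        super().__init__()
        self.xlmr = xlmr                      # pretrained XLM-R backbone (stand-in)
        self.proj = nn.Linear(xlmr_dim, clip_dim)

    def forward(self, input_ids, attention_mask):
        out = self.xlmr(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]     # [CLS] token embedding
        return self.proj(cls)

def teacher_learning_loss(student, clip_text_teacher, en_batch, xx_batch):
    """Stage 1: distill the frozen CLIP text encoder into the student on
    parallel text; MSE between embeddings is assumed as the objective."""
    with torch.no_grad():
        target = clip_text_teacher(**en_batch)         # teacher embeds the English side
    loss_en = F.mse_loss(student(**en_batch), target)  # student on English text
    loss_xx = F.mse_loss(student(**xx_batch), target)  # student on the translated text
    return loss_en + loss_xx

def contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Stage 2: symmetric CLIP-style InfoNCE on multilingual image-text pairs."""
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature
    labels = torch.arange(logits.size(0), device=logits.device)
    return (F.cross_entropy(logits, labels)
            + F.cross_entropy(logits.t(), labels)) / 2
```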
Code Repositories
flagai-open/flagai
Official
pytorch
Mentioned in GitHub
pwc-1/Paper-8/tree/main/altclip
mindspore
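For quick experimentation without cloning either repository, the released weights can also be loaded through the AltCLIP classes in Hugging Face transformers. A minimal sketch follows; the checkpoint id `BAAI/AltCLIP` and the example image URL are assumptions for illustration, not part of this page.

```python
import torch
import requests
from PIL import Image
from transformers import AltCLIPModel, AltCLIPProcessor

# Assumed checkpoint id on the Hugging Face Hub.
model = AltCLIPModel.from_pretrained("BAAI/AltCLIP")
processor = AltCLIPProcessor.from_pretrained("BAAI/AltCLIP")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# Bilingual captions: the model scores Chinese and English text alike.
texts = ["a photo of a cat", "一张猫的照片", "a photo of a dog"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=-1)  # image-text match probabilities
```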
Benchmarks
| Benchmark | Method | Metrics |
|---|---|---|
| zero-shot-cross-modal-retrieval-on-flickr30k | AltCLIP | Image-to-text R@1: 86, R@5: 98, R@10: 99.1; Text-to-image R@1: 72.5, R@5: 91.6, R@10: 95.4 |
| zero-shot-transfer-image-classification-on-1 | AltCLIP | Accuracy (Private): 74.5 |
| zero-shot-transfer-image-classification-on-3 | AltCLIP | Accuracy (Private): 68.1 |
| zero-shot-transfer-image-classification-on-4 | AltCLIP | Accuracy: 87.2 |
| zero-shot-transfer-image-classification-on-5 | AltCLIP | Accuracy (Private): 69.5 |
| zero-shot-transfer-image-classification-on-8 | AltCLIP | Accuracy (Private): 58.7 |
| zero-shot-transfer-image-classification-on-cn | AltCLIP | Accuracy (Private): 59.6 |
| zero-shot-transfer-image-classification-on-cn-1 | AltCLIP | Accuracy (Private): 46.5 |
| zero-shot-transfer-image-classification-on-cn-2 | AltCLIP | Accuracy (Private): 58.5 |
| zero-shot-transfer-image-classification-on-cn-3 | AltCLIP | Accuracy (Private): 79.9 |
| zero-shot-transfer-image-classification-on-cn-4 | AltCLIP | Accuracy (Private): 50.9 |
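For context, zero-shot transfer accuracies like those above are typically computed with the standard CLIP-style protocol: encode prompted class names, encode the image, and predict the class with the highest cosine similarity. A minimal sketch, with a stand-in `encode_text` function and an illustrative prompt template:

```python
import torch
import torch.nn.functional as F

def zero_shot_classify(image_emb, class_names, encode_text,
                       template="a photo of a {}"):
    """Score a batch of image embeddings against prompted class names.
    `encode_text` is a stand-in for the model's text encoder; for the
    Chinese (-CN) benchmarks a template such as '一张{}的照片' would be used."""
    prompts = [template.format(name) for name in class_names]
    text_emb = F.normalize(encode_text(prompts), dim=-1)   # (num_classes, dim)
    image_emb = F.normalize(image_emb, dim=-1)             # (batch, dim)
    scores = image_emb @ text_emb.t()                      # cosine similarities
    return scores.argmax(dim=-1)                           # predicted class ids
```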