
Abstract
Most existing vision-language pre-training methods rely on object-centric features extracted through object detection and make fine-grained alignments between the extracted features and texts. It is challenging for these methods to learn relations among multiple objects. To this end, we propose a new method called X-VLM to perform "multi-grained vision-language pre-training." The key to learning multi-grained alignments is to locate visual concepts in the image given the associated texts, and in the meantime align the texts with the visual concepts, where the alignments are of multiple granularities. Experimental results show that X-VLM effectively leverages the learned multi-grained alignments on many downstream vision-language tasks and consistently outperforms state-of-the-art methods.
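To make the idea of aligning texts with located visual concepts concrete, below is a minimal PyTorch sketch. All names (ToyVisionEncoder, ToyTextEncoder, align_and_locate) and the toy objective are hypothetical stand-ins for illustration only, not the official X-VLM implementation; see the zengyan-97/x-vlm repository for the real code.

```python
# Illustrative sketch only: a toy "align a text with its visual concept and locate it"
# objective. This is NOT the X-VLM implementation (see zengyan-97/x-vlm).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyVisionEncoder(nn.Module):
    """Hypothetical stand-in for a vision transformer: image -> grid of patch features."""
    def __init__(self, dim=256, num_patches=16):
        super().__init__()
        self.proj = nn.Linear(3 * 32 * 32, dim)  # toy patchifier: 16 patches of 32x32x3
        self.num_patches = num_patches

    def forward(self, images):                   # images: (B, 3, 128, 128)
        b = images.size(0)
        patches = images.unfold(2, 32, 32).unfold(3, 32, 32)        # (B, 3, 4, 4, 32, 32)
        patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(b, self.num_patches, -1)
        return self.proj(patches)                # (B, N, D) patch features

class ToyTextEncoder(nn.Module):
    """Hypothetical stand-in for a text encoder: token ids -> pooled text embedding."""
    def __init__(self, vocab=1000, dim=256):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)

    def forward(self, ids):                      # ids: (B, L)
        return self.emb(ids).mean(dim=1)         # (B, D)

def align_and_locate(patch_feats, text_feats, concept_masks, boxes, box_head, temp=0.07):
    """Toy multi-grained objective: (1) align each text with the visual concept it
    describes (a region is the masked pool of its patches; the whole image would be
    the pool of all patches), and (2) regress the concept's bounding box."""
    # Visual-concept feature: masked mean over the patches covering the concept.
    mask = concept_masks.unsqueeze(-1).float()                       # (B, N, 1)
    concept_feats = (patch_feats * mask).sum(1) / mask.sum(1).clamp(min=1e-6)
    # In-batch contrastive alignment between texts and their visual concepts.
    v = F.normalize(concept_feats, dim=-1)
    t = F.normalize(text_feats, dim=-1)
    logits = v @ t.t() / temp                                        # (B, B)
    targets = torch.arange(logits.size(0))
    contrastive = (F.cross_entropy(logits, targets) +
                   F.cross_entropy(logits.t(), targets)) / 2
    # Box regression grounds the concept in the image (cx, cy, w, h in [0, 1]).
    pred_boxes = box_head(concept_feats).sigmoid()
    locate = F.l1_loss(pred_boxes, boxes)
    return contrastive + locate

# Usage with random toy data.
vis, txt = ToyVisionEncoder(), ToyTextEncoder()
box_head = nn.Linear(256, 4)
images = torch.randn(4, 3, 128, 128)
ids = torch.randint(0, 1000, (4, 12))
masks = torch.zeros(4, 16, dtype=torch.bool); masks[:, :4] = True   # patches of each concept
boxes = torch.rand(4, 4)
loss = align_and_locate(vis(images), txt(ids), masks, boxes, box_head)
loss.backward()
```

The sketch only illustrates the shape of the idea: texts are matched against pooled features of the image regions they actually describe (rather than detector boxes), and a localization loss ties each concept to where it appears in the image.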
Code Repositories
zengyan-97/x-vlm
Official
pytorch
Mentioned in GitHub
Benchmarks
| Benchmark | Method | Metrics |
|---|---|---|
| cross-modal-retrieval-on-coco-2014 | X-VLM (base) | Image-to-text R@1: 81.2, R@5: 95.6, R@10: 98.2; Text-to-image R@1: 63.4, R@5: 85.8, R@10: 91.5 |
| cross-modal-retrieval-on-flickr30k | X-VLM (base) | Image-to-text R@1: 97.1, R@5: 100.0, R@10: 100.0; Text-to-image R@1: 86.9, R@5: 97.3, R@10: 98.7 |
| image-captioning-on-coco-captions | X-VLM (base) | BLEU-4: 41.3, CIDEr: 140.8 |
| image-retrieval-on-flickr30k-1k-test | X-VLM (base) | R@1: 86.9, R@5: 97.3, R@10: 98.7 |
| open-vocabulary-attribute-detection-on-ovad-1 | X-VLM | Mean average precision: 28.0 |
| visual-grounding-on-refcoco-test-b | X-VLM (base) | Accuracy (%): 76.91 |
| visual-grounding-on-refcoco-testa | X-VLM (base) | Accuracy (%): 89.00 |
| visual-grounding-on-refcoco-val | X-VLM (base) | Accuracy (%): 84.51 |
| visual-question-answering-on-vqa-v2-test-dev | X-VLM (base) | Accuracy: 78.22 |
| visual-reasoning-on-nlvr2-dev | X-VLM (base) | Accuracy: 84.41 |
| visual-reasoning-on-nlvr2-test | X-VLM (base) | Accuracy: 84.76 |