ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks
Jiasen Lu; Dhruv Batra; Devi Parikh; Stefan Lee

Abstract
We present ViLBERT (short for Vision-and-Language BERT), a model for learning task-agnostic joint representations of image content and natural language. We extend the popular BERT architecture to a multi-modal two-stream model, processing both visual and textual inputs in separate streams that interact through co-attentional transformer layers. We pretrain our model through two proxy tasks on the large, automatically collected Conceptual Captions dataset and then transfer it to multiple established vision-and-language tasks -- visual question answering, visual commonsense reasoning, referring expressions, and caption-based image retrieval -- by making only minor additions to the base architecture. We observe significant improvements across tasks compared to existing task-specific models -- achieving state-of-the-art on all four tasks. Our work represents a shift away from learning groundings between vision and language only as part of task training and towards treating visual grounding as a pretrainable and transferable capability.
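The abstract describes two separate streams (visual and textual) that exchange information through co-attentional transformer layers, where each stream's queries attend over the other stream's keys and values. Below is a minimal PyTorch sketch of one such block; the class name `CoAttentionBlock`, the hidden sizes, and the head count are illustrative assumptions, not the paper's released configuration.

```python
# Minimal sketch of a co-attentional transformer block in the spirit of ViLBERT.
# Hidden sizes, head counts, and module names are assumptions for illustration only.
import torch
import torch.nn as nn


class CoAttentionBlock(nn.Module):
    """Two streams exchange keys/values: vision queries attend to language, and vice versa."""

    def __init__(self, vis_dim=1024, txt_dim=768, num_heads=8):
        super().__init__()
        # Cross-attention: each stream's queries attend over the other stream's keys/values.
        self.vis_cross_attn = nn.MultiheadAttention(vis_dim, num_heads, kdim=txt_dim, vdim=txt_dim, batch_first=True)
        self.txt_cross_attn = nn.MultiheadAttention(txt_dim, num_heads, kdim=vis_dim, vdim=vis_dim, batch_first=True)
        self.vis_norm = nn.LayerNorm(vis_dim)
        self.txt_norm = nn.LayerNorm(txt_dim)
        # Position-wise feed-forward sublayers, one per stream.
        self.vis_ffn = nn.Sequential(nn.Linear(vis_dim, 4 * vis_dim), nn.GELU(), nn.Linear(4 * vis_dim, vis_dim))
        self.txt_ffn = nn.Sequential(nn.Linear(txt_dim, 4 * txt_dim), nn.GELU(), nn.Linear(4 * txt_dim, txt_dim))
        self.vis_ffn_norm = nn.LayerNorm(vis_dim)
        self.txt_ffn_norm = nn.LayerNorm(txt_dim)

    def forward(self, vis_feats, txt_feats):
        # Vision stream queries the language stream (and vice versa), with residual connections.
        vis_attended, _ = self.vis_cross_attn(vis_feats, txt_feats, txt_feats)
        txt_attended, _ = self.txt_cross_attn(txt_feats, vis_feats, vis_feats)
        vis_feats = self.vis_norm(vis_feats + vis_attended)
        txt_feats = self.txt_norm(txt_feats + txt_attended)
        # Each stream then passes through its own feed-forward sublayer.
        vis_feats = self.vis_ffn_norm(vis_feats + self.vis_ffn(vis_feats))
        txt_feats = self.txt_ffn_norm(txt_feats + self.txt_ffn(txt_feats))
        return vis_feats, txt_feats


# Example: a batch of 2 images with 36 region features and 2 captions of 20 tokens.
vis = torch.randn(2, 36, 1024)
txt = torch.randn(2, 20, 768)
new_vis, new_txt = CoAttentionBlock()(vis, txt)
print(new_vis.shape, new_txt.shape)  # torch.Size([2, 36, 1024]) torch.Size([2, 20, 768])
```

In the full model, several of these co-attentional blocks are interleaved with ordinary within-stream transformer layers, so each modality keeps its own representation while repeatedly conditioning on the other.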
Benchmarks
| Benchmark | Model | Metrics |
|---|---|---|
| Visual Question Answering on A-OKVQA | ViLBERT - OK-VQA | DA VQA Score: 9.2; MC Accuracy: 34.1 |
| Visual Question Answering on A-OKVQA | ViLBERT | DA VQA Score: 25.9; MC Accuracy: 41.5 |
| Visual Question Answering on A-OKVQA | ViLBERT - VQA | DA VQA Score: 12.0; MC Accuracy: 42.1 |
| Visual Question Answering on VQA v2 test-dev | ViLBERT | Accuracy: 70.55 |