ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks

Jiasen Lu, Dhruv Batra, Devi Parikh, Stefan Lee


Abstract

We present ViLBERT (short for Vision-and-Language BERT), a model for learning task-agnostic joint representations of image content and natural language. We extend the popular BERT architecture to a multi-modal two-stream model, processing both visual and textual inputs in separate streams that interact through co-attentional transformer layers. We pretrain our model through two proxy tasks on the large, automatically collected Conceptual Captions dataset and then transfer it to multiple established vision-and-language tasks -- visual question answering, visual commonsense reasoning, referring expressions, and caption-based image retrieval -- by making only minor additions to the base architecture. We observe significant improvements across tasks compared to existing task-specific models -- achieving state-of-the-art on all four tasks. Our work represents a shift away from learning groundings between vision and language only as part of task training and towards treating visual grounding as a pretrainable and transferable capability.
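The core mechanism described above is the co-attentional transformer layer: the visual stream's queries attend over the linguistic stream's keys and values, and vice versa, so that each modality conditions on the other. The sketch below is a minimal, single-head NumPy illustration of that query/key-value exchange (no learned projections, layer norm, or feed-forward sublayers); the function name and dimensions are illustrative, not from the paper's released code.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_attention(vis, txt):
    """One co-attentional exchange between the two streams.

    Each stream's queries attend to the OTHER stream's keys/values
    (single head, identity projections, for illustration only).
    """
    d = vis.shape[-1]
    # visual queries over linguistic keys/values
    vis_out = softmax(vis @ txt.T / np.sqrt(d)) @ txt
    # linguistic queries over visual keys/values
    txt_out = softmax(txt @ vis.T / np.sqrt(d)) @ vis
    return vis_out, txt_out

vis = np.random.randn(5, 8)   # 5 image-region features, dim 8
txt = np.random.randn(7, 8)   # 7 word-token features, dim 8
v2, t2 = co_attention(vis, txt)
print(v2.shape, t2.shape)  # (5, 8) (7, 8)
```

Note that each stream keeps its own sequence length; only the attended content crosses modalities, which is what lets the two streams use different depths and widths in the full model.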

Code Repositories

jialinwu17/tmpimgs (PyTorch)
Mehrab-Tanjim/enforce-reasoning (PyTorch)
zihaow123/unimm (PyTorch)
vmurahari3/visdial-bert (PyTorch)
jiasenlu/vilbert_beta (PyTorch)
facebookresearch/vilbert-multi-task (PyTorch)
hwanheelee1993/vilbertscore (PyTorch)
johntiger1/multitask_multimodal (PyTorch)
Mehrab-Tanjim/vilbert-rationalization (PyTorch)
fuqianya/ViLBERT-Paddle (Paddle)

Benchmarks

Benchmark | Methodology | Metrics
visual-question-answering-on-a-okvqa | ViLBERT - OK-VQA | DA VQA Score: 9.2; MC Accuracy: 34.1
visual-question-answering-on-a-okvqa | ViLBERT | DA VQA Score: 25.9; MC Accuracy: 41.5
visual-question-answering-on-a-okvqa | ViLBERT - VQA | DA VQA Score: 12.0; MC Accuracy: 42.1
visual-question-answering-on-vqa-v2-test-dev | ViLBERT | Accuracy: 70.55
