VLC-BERT: Visual Question Answering with Contextualized Commonsense Knowledge

Sahithya Ravi, Aditya Chinchure, Leonid Sigal, Renjie Liao, Vered Shwartz


Abstract

There has been a growing interest in solving Visual Question Answering (VQA) tasks that require the model to reason beyond the content present in the image. In this work, we focus on questions that require commonsense reasoning. In contrast to previous methods which inject knowledge from static knowledge bases, we investigate the incorporation of contextualized knowledge using Commonsense Transformer (COMET), an existing knowledge model trained on human-curated knowledge bases. We propose a method to generate, select, and encode external commonsense knowledge alongside visual and textual cues in a new pre-trained Vision-Language-Commonsense transformer model, VLC-BERT. Through our evaluation on the knowledge-intensive OK-VQA and A-OKVQA datasets, we show that VLC-BERT is capable of outperforming existing models that utilize static knowledge bases. Furthermore, through a detailed analysis, we explain which questions benefit, and which don't, from contextualized commonsense knowledge from COMET.
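The abstract describes a three-step pipeline: generate candidate commonsense inferences with COMET, select the ones most relevant to the question, and encode them alongside the visual and textual inputs of the transformer. The sketch below illustrates one plausible shape for that pipeline; the function names (comet_generate, scorer), the relation list, and the [SEP]-joined text format are illustrative assumptions, not the authors' implementation, which is available in aditya10/vlc-bert.

```python
# Minimal sketch of the generate -> select -> encode pipeline, assuming a
# hypothetical COMET wrapper and sentence-similarity scorer.

from typing import Callable, List


def generate_commonsense(question: str,
                         comet_generate: Callable[[str, str], List[str]]) -> List[str]:
    """Query a COMET-style knowledge model for inferences about the question.
    `comet_generate(prompt, relation)` is a hypothetical callable wrapping COMET."""
    relations = ["xNeed", "xIntent", "xEffect"]  # example ATOMIC-style relations
    inferences: List[str] = []
    for rel in relations:
        inferences.extend(comet_generate(question, rel))
    return inferences


def select_top_k(question: str, inferences: List[str],
                 scorer: Callable[[str, str], float], k: int = 5) -> List[str]:
    """Keep the k inferences most relevant to the question.
    `scorer(a, b)` is any sentence-similarity function (e.g. cosine over embeddings)."""
    ranked = sorted(inferences, key=lambda s: scorer(question, s), reverse=True)
    return ranked[:k]


def build_text_input(question: str, selected: List[str]) -> str:
    """Concatenate the question with the selected commonsense expansions so the
    vision-language transformer can attend to them alongside image features."""
    return question + " [SEP] " + " [SEP] ".join(selected)
```

In the paper's actual model, the selected expansions are fused into the VL-BERT-style input rather than naively concatenated, but the ordering of steps (generate, select, encode) follows the description above.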

Code Repositories

aditya10/vlc-bert (official, PyTorch)

Benchmarks

Benchmark                               Methodology   Metrics
visual-question-answering-on-a-okvqa    VLC-BERT      DA VQA Score: 38.05
visual-question-answering-on-ok-vqa     VLC-BERT      Accuracy: 43.1
