HyperAI
DocVQA: A Dataset for VQA on Document Images

Minesh Mathew Dimosthenis Karatzas C.V. Jawahar

Abstract

We present a new dataset for Visual Question Answering (VQA) on document images called DocVQA. The dataset consists of 50,000 questions defined on 12,000+ document images. A detailed analysis of the dataset in comparison with similar datasets for VQA and reading comprehension is presented. We report several baseline results by adopting existing VQA and reading comprehension models. Although the existing models perform reasonably well on certain types of questions, there is a large performance gap compared to human performance (94.36% accuracy). The models need to improve specifically on questions where understanding the structure of the document is crucial. The dataset, code, and leaderboard are available at docvqa.org.

Benchmarks

Benchmark | Methodology | Metrics
visual-question-answering-on-docvqa-test | BERT_LARGE_SQUAD_DOCVQA_FINETUNED_Baseline | ANLS: 0.665, Accuracy: 55.77
visual-question-answering-on-docvqa-test | Human | ANLS: 0.9436
visual-question-answering-on-docvqa-val | BERT LARGE Baseline | Accuracy: 54.48
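The ANLS values reported above use Average Normalized Levenshtein Similarity, the evaluation metric adopted by the DocVQA leaderboard: each prediction is scored against all ground-truth answers by normalized edit distance, with similarities below a threshold (0.5 in the official evaluation) zeroed out. A minimal sketch, with illustrative function names:

```python
def levenshtein(a: str, b: str) -> int:
    """Standard dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            # deletion, insertion, or substitution (free if characters match)
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[len(b)]

def anls_score(prediction: str, ground_truths: list[str], tau: float = 0.5) -> float:
    """ANLS for one question: best thresholded similarity over all answers.

    tau=0.5 matches the official DocVQA evaluation; similarities with a
    normalized edit distance >= tau are treated as wrong (score 0).
    """
    best = 0.0
    for gt in ground_truths:
        p, g = prediction.lower().strip(), gt.lower().strip()
        if not p and not g:
            sim = 1.0
        else:
            nl = levenshtein(p, g) / max(len(p), len(g))
            sim = 1.0 - nl if nl < tau else 0.0
        best = max(best, sim)
    return best
```

The dataset-level ANLS is the mean of `anls_score` over all questions; for example, an exact (case-insensitive) match scores 1.0, while a prediction differing from every ground truth in half or more of its characters scores 0.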
