Minesh Mathew Dimosthenis Karatzas C.V. Jawahar

Abstract
We present a new dataset for Visual Question Answering (VQA) on document images called DocVQA. The dataset consists of 50,000 questions defined on 12,000+ document images. A detailed analysis of the dataset in comparison with similar datasets for VQA and reading comprehension is presented. We report several baseline results by adopting existing VQA and reading comprehension models. Although the existing models perform reasonably well on certain types of questions, there is a large performance gap compared to human performance (94.36% accuracy). The models need to improve specifically on questions where understanding the structure of the document is crucial. The dataset, code and leaderboard are available at docvqa.org.
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| visual-question-answering-on-docvqa-test | BERT_LARGE_SQUAD_DOCVQA_FINETUNED_Baseline | ANLS: 0.665 Accuracy: 55.77 |
| visual-question-answering-on-docvqa-test | Human | ANLS: 0.9436 |
| visual-question-answering-on-docvqa-val | BERT LARGE Baseline | ANLS: 0.655 Accuracy: 54.48 |
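
The ANLS (Average Normalized Levenshtein Similarity) figures above follow the DocVQA evaluation protocol: each prediction is scored against every ground-truth answer for that question as 1 minus the normalized edit distance, scores with a normalized distance of τ = 0.5 or more are zeroed out, and the best per-question score is averaged over all questions. Below is a minimal sketch of that computation; the lower-casing and whitespace-stripping normalization, and the names `levenshtein` and `anls`, are illustrative assumptions rather than the official evaluation code.

```python
from typing import List


def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance between two strings.
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            curr[j] = min(prev[j] + 1,        # deletion
                          curr[j - 1] + 1,    # insertion
                          prev[j - 1] + cost) # substitution
        prev = curr
    return prev[n]


def anls(predictions: List[str], gold_answers: List[List[str]],
         tau: float = 0.5) -> float:
    """Average Normalized Levenshtein Similarity over a set of questions.

    Each question may have several acceptable ground-truth answers; the
    prediction is credited with the best similarity among them, and
    similarities whose normalized distance reaches tau are zeroed out.
    """
    total = 0.0
    for pred, answers in zip(predictions, gold_answers):
        best = 0.0
        for ans in answers:
            # Assumed normalization: case- and whitespace-insensitive match.
            p, a = pred.strip().lower(), ans.strip().lower()
            nl = levenshtein(p, a) / max(len(p), len(a), 1)
            s = 1.0 - nl if nl < tau else 0.0
            best = max(best, s)
        total += best
    return total / max(len(predictions), 1)
```

For example, `anls(["fort worth, texas"], [["Fort Worth, Texas"]])` returns 1.0, while a prediction sharing less than half its characters with every ground-truth answer contributes 0. The threshold is what makes ANLS softer than exact-match accuracy while still penalizing answers that are mostly wrong, which is why the table reports both metrics.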