HyperAI

Layout and Task Aware Instruction Prompt for Zero-shot Document Image Question Answering

Wenjin Wang, Yunhao Li, Yixin Ou, Yin Zhang

Abstract

Layout-aware pre-trained models have achieved significant progress on document image question answering. They introduce extra learnable modules into existing language models to capture layout information within document images from the text bounding box coordinates produced by OCR tools. However, these extra modules necessitate pre-training on extensive collections of document images, which prevents such methods from directly utilizing off-the-shelf instruction-tuned language foundation models that have recently shown promising potential in zero-shot learning. Instead, in this paper, we find that instruction-tuned language models like Claude and ChatGPT can understand layout expressed through spaces and line breaks. Based on this observation, we propose the LAyout and Task aware Instruction Prompt (LATIN-Prompt), which consists of layout-aware document content and a task-aware instruction. Specifically, the former uses appropriate spaces and line breaks to recover the layout information among the text segments obtained by OCR tools, and the latter ensures that generated answers adhere to formatting requirements. Moreover, we propose LAyout and Task aware Instruction Tuning (LATIN-Tuning) to improve the performance of small instruction-tuned models like Alpaca. Experimental results show that LATIN-Prompt enables the zero-shot performance of Claude and ChatGPT to be comparable to the fine-tuned performance of SOTAs on document image question answering, and that LATIN-Tuning significantly enhances the zero-shot performance of Alpaca. For example, LATIN-Prompt improves the performance of Claude and ChatGPT on DocVQA by 263% and 20%, respectively, and LATIN-Tuning improves the performance of Alpaca on DocVQA by 87.7%. Quantitative and qualitative analyses demonstrate the effectiveness of LATIN-Prompt and LATIN-Tuning. We provide the code in the supplementary material and will release it to facilitate future research.
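The core idea of the layout-aware document content — recovering 2D layout with only spaces and line breaks — can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' released code: the `layout_text` helper and the character-cell constants `CHAR_W`/`CHAR_H` are assumptions for the sketch.

```python
# Hypothetical sketch: place OCR text segments onto a character grid so that
# spaces and line breaks reproduce the document's 2D layout in plain text.

CHAR_W, CHAR_H = 8, 20  # assumed average character width/height in pixels


def layout_text(segments):
    """segments: list of (text, (x0, y0, x1, y1)) tuples from an OCR tool."""
    lines = {}
    for text, (x0, y0, _x1, _y1) in segments:
        # Map pixel coordinates to a (row, column) character cell.
        row, col = y0 // CHAR_H, x0 // CHAR_W
        lines.setdefault(row, {})[col] = text

    out = []
    for row in sorted(lines):
        cursor, parts = 0, []
        for col in sorted(lines[row]):
            # Pad with spaces up to the segment's column (at least one space
            # between segments on the same line).
            parts.append(" " * max(col - cursor, 1 if cursor else 0))
            parts.append(lines[row][col])
            cursor = col + len(lines[row][col])
        out.append("".join(parts).rstrip())
    return "\n".join(out)
```

The resulting string can then be placed inside a task-aware instruction (e.g. asking for a concise, extractive answer) before being sent to the model.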

Code Repositories

wenjinw/latin-prompt (official, PyTorch)
deepopinion/anls-star-metric (mentioned in GitHub)
deepopinion/anls_star_metric (mentioned in GitHub)

Benchmarks

Benchmark                                   Methodology              Metrics
visual-question-answering-on-docvqa-test    GPT-4                    ANLS: 0.884
visual-question-answering-on-docvqa-test    Claude + LATIN-Prompt    ANLS: 0.8336
visual-question-answering-on-docvqa-test    GPT-3.5 + LATIN-Prompt   ANLS: 0.8255
visual-question-answering-vqa-on            GPT-3.5 + LATIN-Prompt   ANLS: 48.98
visual-question-answering-vqa-on            Claude + LATIN-Prompt    ANLS: 54.51
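The results above are reported in ANLS (Average Normalized Levenshtein Similarity), the standard DocVQA metric: per question, the best score over ground-truth answers of 1 minus the normalized edit distance, zeroed out when that distance exceeds a threshold of 0.5. A minimal self-contained sketch (function names are illustrative, not taken from the repositories listed above):

```python
# Minimal sketch of the ANLS metric as commonly defined for DocVQA.

def levenshtein(a, b):
    """Standard dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]


def anls(predictions, gold_answers, tau=0.5):
    """predictions: list of strings; gold_answers: list of lists of strings."""
    scores = []
    for pred, golds in zip(predictions, gold_answers):
        best = 0.0
        for gold in golds:
            p, g = pred.strip().lower(), gold.strip().lower()
            # Normalized edit distance; scores below the threshold tau count,
            # everything else contributes 0.
            nl = levenshtein(p, g) / max(len(p), len(g), 1)
            best = max(best, 1 - nl if nl < tau else 0.0)
        scores.append(best)
    return sum(scores) / len(scores)
```

For example, a prediction one edit away from a five-character answer scores 0.8 on that question.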
