Dual Attention Networks for Multimodal Reasoning and Matching

Hyeonseob Nam; Jung-Woo Ha; Jeonghee Kim

Abstract

We propose Dual Attention Networks (DANs) which jointly leverage visual and textual attention mechanisms to capture fine-grained interplay between vision and language. DANs attend to specific regions in images and words in text through multiple steps and gather essential information from both modalities. Based on this framework, we introduce two types of DANs for multimodal reasoning and matching, respectively. The reasoning model allows visual and textual attentions to steer each other during collaborative inference, which is useful for tasks such as Visual Question Answering (VQA). In addition, the matching model exploits the two attention mechanisms to estimate the similarity between images and sentences by focusing on their shared semantics. Our extensive experiments validate the effectiveness of DANs in combining vision and language, achieving state-of-the-art performance on public benchmarks for VQA and image-text matching.
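
To make the attention procedure described in the abstract concrete, below is a minimal sketch of one dual-attention step in PyTorch, under assumed shapes: precomputed image region features V, word features U, and a joint memory vector m that steers both attentions, as in the reasoning model. The class, layer, and variable names (DualAttentionStep, v_score, u_score, m) are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualAttentionStep(nn.Module):
    """One step of joint visual/textual attention (illustrative sketch)."""

    def __init__(self, d: int):
        super().__init__()
        # Scoring networks for the visual and textual attentions
        # (a simple two-layer form; the exact parameterization is assumed).
        self.v_score = nn.Sequential(nn.Linear(2 * d, d), nn.Tanh(), nn.Linear(d, 1))
        self.u_score = nn.Sequential(nn.Linear(2 * d, d), nn.Tanh(), nn.Linear(d, 1))

    def forward(self, V, U, m):
        # V: (B, N, d) image region features
        # U: (B, T, d) word features
        # m: (B, d) joint memory that conditions both attentions (reasoning model)
        B, N, _ = V.shape
        T = U.shape[1]

        # Visual attention: score each region against the memory, then pool.
        alpha = F.softmax(
            self.v_score(torch.cat([V, m.unsqueeze(1).expand(-1, N, -1)], dim=-1)).squeeze(-1),
            dim=-1)                                     # (B, N)
        v_ctx = (alpha.unsqueeze(-1) * V).sum(dim=1)    # (B, d) attended visual context

        # Textual attention: score each word against the memory, then pool.
        beta = F.softmax(
            self.u_score(torch.cat([U, m.unsqueeze(1).expand(-1, T, -1)], dim=-1)).squeeze(-1),
            dim=-1)                                     # (B, T)
        u_ctx = (beta.unsqueeze(-1) * U).sum(dim=1)     # (B, d) attended textual context

        # Update the joint memory with both contexts so the next step's
        # attentions are conditioned on the information gathered here.
        m_next = m + v_ctx * u_ctx
        return m_next, alpha, beta
```

Running several such steps and feeding the final memory to an answer classifier corresponds to the multi-step reasoning setup for VQA; for matching, the attended visual and textual vectors would instead be compared to produce an image-sentence similarity score.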

Benchmarks

Benchmark                                     | Methodology  | Metrics
image-retrieval-on-flickr30k-1k-test          | DAN          | R@1: 39.4, R@5: 69.2, R@10: 79.1
visual-question-answering-on-vqa-v1-test-dev  | DAN (ResNet) | Accuracy: 64.3
