Multimodal Residual Learning for Visual QA

Jin-Hwa Kim; Sang-Woo Lee; Dong-Hyun Kwak; Min-Oh Heo; Jeonghee Kim; Jung-Woo Ha; Byoung-Tak Zhang

Abstract

Deep neural networks continue to advance the state of the art in image recognition with a variety of methods. However, applications of these methods to multimodal settings remain limited. We present Multimodal Residual Networks (MRN) for multimodal residual learning in visual question-answering, extending the idea of deep residual learning. Unlike deep residual learning, MRN effectively learns a joint representation from vision and language information. The main idea is to use element-wise multiplication for the joint residual mappings, exploiting the residual learning of attentional models in recent studies. We also explore various alternative models introduced by the multimodal setting. We achieve state-of-the-art results on the Visual QA dataset for both the Open-Ended and Multiple-Choice tasks. Moreover, we introduce a novel method to visualize the attention effect of the joint representations for each learning block using the back-propagation algorithm, even though the visual features are collapsed without spatial information.
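As a rough illustration of the main idea in the abstract, the sketch below implements one joint residual learning block in PyTorch (the language of the official repository). The feature dimensions, Tanh activations, and the simple identity shortcut on the question pathway are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class JointResidualBlock(nn.Module):
    """MRN-style learning block sketch: element-wise multiplication as the
    joint residual mapping over the question pathway. Dimensions and
    activations are illustrative assumptions."""

    def __init__(self, q_dim: int = 2400, v_dim: int = 2048, joint_dim: int = 1200):
        super().__init__()
        # Nonlinear projections of the question and visual feature vectors
        # into a common joint space.
        self.q_proj = nn.Sequential(nn.Linear(q_dim, joint_dim), nn.Tanh())
        self.v_proj = nn.Sequential(nn.Linear(v_dim, joint_dim), nn.Tanh())
        # Map the joint representation back to the question dimensionality
        # so it can be added as a residual on the question pathway.
        self.out = nn.Linear(joint_dim, q_dim)

    def forward(self, q: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
        # Joint residual mapping: element-wise product of the two modalities.
        joint = self.q_proj(q) * self.v_proj(v)
        # Residual (shortcut) connection over the question representation.
        return q + self.out(joint)


# Tiny usage example with random features standing in for a question
# embedding and a collapsed (non-spatial) visual feature vector.
block = JointResidualBlock()
q = torch.randn(4, 2400)   # batch of question embeddings
v = torch.randn(4, 2048)   # batch of image feature vectors
h = block(q, v)            # shape: (4, 2400)
```

Stacking several such blocks and feeding the final question-pathway output into an answer classifier would follow the deep residual composition described in the abstract; the number of blocks and the exact dimensions are hyperparameters of the original work, not fixed by this sketch.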

Code Repositories

jnhwkim/nips-mrn-vqa (official, PyTorch)

Benchmarks

Benchmark                                     Methodology             Percentage correct
visual-question-answering-on-coco-visual-1    MRN                     66.3
visual-question-answering-on-coco-visual-4    MRN + global features   61.8
