
Distilling Translations with Visual Awareness

Julia Ive; Pranava Madhyastha; Lucia Specia

Abstract

Previous work on multimodal machine translation has shown that visual information is only needed in very specific cases, for example in the presence of ambiguous words where the textual context is not sufficient. As a consequence, models tend to learn to ignore this information. We propose a translate-and-refine approach to this problem, where images are only used by a second-stage decoder. This approach is trained jointly to generate a good first-draft translation and to improve over this draft by (i) making better use of the target-language textual context (both left- and right-side contexts) and (ii) making use of visual context. This approach leads to state-of-the-art results. Additionally, we show that it has the ability to recover from erroneous or missing words in the source language.
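The sketch below illustrates the translate-and-refine idea from the abstract: a text-only first-stage decoder produces a draft, and a second-stage decoder attends over the full draft (so it sees both left and right target context) together with image region features before emitting the refined translation. The module names, dimensions, and the use of standard PyTorch Transformer layers are assumptions for illustration, not the authors' exact architecture; causal masking and decoding loops are omitted.

```python
# Minimal sketch of a two-stage translate-and-refine model (assumed architecture).
import torch
import torch.nn as nn

class TranslateAndRefine(nn.Module):
    def __init__(self, vocab_size=10000, d_model=256, img_dim=2048, n_heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True), num_layers=2)
        # Stage 1: text-only decoder that produces the draft translation.
        self.draft_decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True), num_layers=2)
        # Stage 2: refinement decoder; its memory is the source, the full draft
        # (left and right target context), and projected image region features.
        self.img_proj = nn.Linear(img_dim, d_model)
        self.refine_decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True), num_layers=2)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, src_ids, draft_ids, refine_ids, img_feats):
        src = self.encoder(self.embed(src_ids))                    # (B, S, d)
        draft_h = self.draft_decoder(self.embed(draft_ids), src)   # (B, T, d)
        draft_logits = self.out(draft_h)
        # Second stage attends over source + complete draft + visual features.
        memory = torch.cat([src, draft_h, self.img_proj(img_feats)], dim=1)
        refine_h = self.refine_decoder(self.embed(refine_ids), memory)
        refine_logits = self.out(refine_h)
        # Both stages are trained jointly (e.g. sum of two cross-entropy losses).
        return draft_logits, refine_logits

if __name__ == "__main__":
    model = TranslateAndRefine()
    src = torch.randint(0, 10000, (2, 12))
    tgt = torch.randint(0, 10000, (2, 14))
    imgs = torch.randn(2, 36, 2048)  # e.g. 36 region features per image (assumed)
    draft_logits, refine_logits = model(src, tgt, tgt, imgs)
    print(draft_logits.shape, refine_logits.shape)
```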

Benchmarks

Benchmark                                          | Metrics
multimodal-machine-translation-on-multi30kdel     | Meteor (EN-FR): 74.6
multimodal-machine-translation-on-multi30kdel+obj | BLEU (EN-DE): 38; Meteor (EN-DE): 55.6
