
Incorporating Global Visual Features into Attention-Based Neural Machine Translation

Iacer Calixto; Qun Liu; Nick Campbell

Abstract

We introduce multi-modal, attention-based neural machine translation (NMT) models which incorporate visual features into different parts of both the encoder and the decoder. We utilise global image features extracted using a pre-trained convolutional neural network and incorporate them (i) as words in the source sentence, (ii) to initialise the encoder hidden state, and (iii) as additional data to initialise the decoder hidden state. In our experiments, we evaluate how these different strategies to incorporate global image features compare and which ones perform best. We also study the impact of adding synthetic multi-modal, multilingual data and find that the additional data have a positive impact on multi-modal models. We report new state-of-the-art results, and our best models also significantly improve on a comparable phrase-based Statistical MT (PBSMT) model trained on the Multi30k data set according to all metrics evaluated. To the best of our knowledge, this is the first time a purely neural model significantly improves over a PBSMT model on all metrics evaluated on this data set.
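
The sketch below is a minimal illustration (not the authors' code) of the three incorporation strategies the abstract describes, written as a single PyTorch module. The paper evaluates the strategies separately; here all three are shown together for compactness, attention over the encoder states is omitted, and all names and dimensions (e.g. img_dim=4096 for a CNN fully-connected feature, hid_dim=512) are illustrative assumptions.

```python
# A minimal sketch of incorporating a global image feature into an
# encoder-decoder NMT model in three ways: (i) as an extra source "word",
# (ii) as the encoder's initial hidden state, (iii) as the decoder's
# initial hidden state. Hyperparameters and module names are assumptions.
import torch
import torch.nn as nn

class MultimodalNMTSketch(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=256, hid_dim=512, img_dim=4096):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        # Projections of the global image feature into each space it is used in.
        self.img_to_word = nn.Linear(img_dim, emb_dim)  # (i) image as source word
        self.img_to_enc = nn.Linear(img_dim, hid_dim)   # (ii) encoder init state
        self.img_to_dec = nn.Linear(img_dim, hid_dim)   # (iii) decoder init state

    def forward(self, src_ids, tgt_ids, img_feat):
        # img_feat: (batch, img_dim) global feature from a pre-trained CNN.
        src_emb = self.embed(src_ids)                       # (batch, src_len, emb)

        # (i) Prepend the projected image vector as an extra source token.
        img_word = self.img_to_word(img_feat).unsqueeze(1)  # (batch, 1, emb)
        src_emb = torch.cat([img_word, src_emb], dim=1)

        # (ii) Initialise the encoder hidden state from the image.
        h0_enc = torch.tanh(self.img_to_enc(img_feat)).unsqueeze(0)
        enc_out, _ = self.encoder(src_emb, h0_enc)

        # (iii) Initialise the decoder hidden state from the image.
        # (The paper's models attend over enc_out; omitted here for brevity.)
        h0_dec = torch.tanh(self.img_to_dec(img_feat)).unsqueeze(0)
        dec_out, _ = self.decoder(self.embed(tgt_ids), h0_dec)
        return dec_out

# Toy usage with random tensors:
model = MultimodalNMTSketch()
src = torch.randint(0, 10000, (2, 7))
tgt = torch.randint(0, 10000, (2, 9))
img = torch.randn(2, 4096)
out = model(src, tgt, img)  # shape: (2, 9, 512)
```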

Benchmarks

Benchmark: multimodal-machine-translation-on-multi30k
Methodology: IMGD
Metrics:
    BLEU (EN-DE): 37.3
    Meteor (EN-DE): 55.1
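
For reference, BLEU scores like the one above can be reproduced with the sacrebleu library, as sketched below; Meteor is usually computed with the original Java implementation or nltk and is omitted here. The example sentences are made up, not Multi30k data.

```python
# A minimal sketch of corpus-level BLEU scoring with sacrebleu.
import sacrebleu

hypotheses = ["ein Hund läuft über die Wiese ."]
references = [["ein Hund rennt über die Wiese ."]]  # one reference stream

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU: {bleu.score:.1f}")
```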
