Vision-and-Language Navigation: Interpreting visually-grounded navigation instructions in real environments

Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko Sünderhauf, Ian Reid, Stephen Gould, Anton van den Hengel

Abstract

A robot that can carry out a natural-language instruction has been a dream since before the Jetsons cartoon series imagined a life of leisure mediated by a fleet of attentive robot helpers. It is a dream that remains stubbornly distant. However, recent advances in vision and language methods have made incredible progress in closely related areas. This is significant because a robot interpreting a natural-language navigation instruction on the basis of what it sees is carrying out a vision and language process that is similar to Visual Question Answering. Both tasks can be interpreted as visually grounded sequence-to-sequence translation problems, and many of the same methods are applicable. To enable and encourage the application of vision and language methods to the problem of interpreting visually-grounded navigation instructions, we present the Matterport3D Simulator -- a large-scale reinforcement learning environment based on real imagery. Using this simulator, which can in future support a range of embodied vision and language tasks, we provide the first benchmark dataset for visually-grounded natural language navigation in real buildings -- the Room-to-Room (R2R) dataset.
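The abstract frames instruction following as a visually grounded sequence-to-sequence problem: encode the natural-language instruction, then condition on the current visual observation to pick each navigation action. The toy sketch below illustrates that framing only; the vocabulary, embedding tables, mean-pooled encoder, and additive vision-language fusion are all simplified stand-ins (a real agent such as the paper's Seq2Seq baseline uses a learned LSTM encoder-decoder with attention over CNN image features), and none of this is the Matterport3D Simulator's actual API.

```python
import random

random.seed(0)

VOCAB = {"go": 0, "left": 1, "right": 2, "forward": 3, "stop": 4}
ACTIONS = ["left", "right", "forward", "stop"]
DIM = 8

def rand_vec():
    return [random.gauss(0, 1) for _ in range(DIM)]

# Toy embedding tables; a real model would learn these end-to-end.
word_emb = {w: rand_vec() for w in VOCAB}
action_emb = {a: rand_vec() for a in ACTIONS}

def encode_instruction(tokens):
    # Mean-pooled word embeddings stand in for an LSTM instruction encoder.
    vecs = [word_emb[t] for t in tokens if t in VOCAB]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

def choose_action(instr_vec, obs_vec):
    # Fuse language and vision, then score each discrete action.
    # A real agent would use attention here rather than simple addition.
    context = [i + o for i, o in zip(instr_vec, obs_vec)]
    score = lambda a: sum(c * e for c, e in zip(context, action_emb[a]))
    return max(ACTIONS, key=score)

instr = encode_instruction("go forward then stop".split())
obs = rand_vec()  # stands in for CNN features of the agent's current view
print(choose_action(instr, obs))  # one of the discrete navigation actions
```

In the full task, this action-selection step runs in a loop against the simulator, with a new panoramic observation after every move, until the agent emits a stop action.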

Code Repositories

peteanderson80/Matterport3DSimulator (official; PyTorch)
MarSaKi/NvEM (PyTorch)
batra-mlp-lab/vln-chasing-ghosts (PyTorch)
google-research-datasets/RxR (TensorFlow)
YicongHong/Recurrent-VLN-BERT (PyTorch)
batra-mlp-lab/vln-sim2real (PyTorch)
YicongHong/Entity-Graph-VLN (PyTorch)
hlr/vln-trans (PyTorch)

Benchmarks

Benchmark: visual-navigation-on-room-to-room-1
Method: Seq2Seq baseline
Metrics: SPL = 0.18
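The SPL metric reported above is "Success weighted by (normalized inverse) Path Length": for each episode it multiplies binary success by the ratio of the shortest-path length to the longer of the shortest path and the path the agent actually took, then averages over episodes. A minimal computation:

```python
# SPL = (1/N) * sum_i S_i * l_i / max(p_i, l_i), where for episode i:
#   S_i = 1 if the agent stopped within the success radius of the goal, else 0
#   l_i = shortest-path distance from start to goal
#   p_i = length of the path the agent actually took

def spl(episodes):
    """episodes: list of (success: bool, shortest: float, taken: float)."""
    if not episodes:
        return 0.0
    total = 0.0
    for success, shortest, taken in episodes:
        if success:
            total += shortest / max(taken, shortest)
    return total / len(episodes)

# Example: an efficient success, an inefficient success, and a failure.
print(spl([(True, 10.0, 10.0), (True, 10.0, 20.0), (False, 8.0, 5.0)]))
# → 0.5  ( = (1.0 + 0.5 + 0.0) / 3 )
```

Note that SPL penalizes inefficient successes (the second episode scores 0.5, not 1.0) and gives no credit for failures, however short the path taken.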
