HyperAI



Vision-and-Dialog Navigation

Jesse Thomason, Michael Murray, Maya Cakmak, Luke Zettlemoyer


Abstract

Robots navigating in human environments should use language to ask for assistance and be able to understand human responses. To study this challenge, we introduce Cooperative Vision-and-Dialog Navigation, a dataset of over 2k embodied, human-human dialogs situated in simulated, photorealistic home environments. The Navigator asks questions to their partner, the Oracle, who has privileged access to the best next steps the Navigator should take according to a shortest path planner. To train agents that search an environment for a goal location, we define the Navigation from Dialog History task. An agent, given a target object and a dialog history between humans cooperating to find that object, must infer navigation actions towards the goal in unexplored environments. We establish an initial, multi-modal sequence-to-sequence model and demonstrate that looking farther back in the dialog history improves performance. Source code and a live interface demo can be found at https://cvdn.dev/
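The finding that "looking farther back in the dialog history improves performance" comes from varying how many past dialog turns the agent's encoder sees. A minimal sketch of that input construction, assuming the dialog is a list of (speaker, utterance) turns; the separator token names and the function itself are illustrative, not taken from the released code:

```python
def build_ndh_input(target_object, dialog, history_len):
    """Flatten a dialog history into one token sequence for the encoder.

    dialog is a list of (speaker, utterance) turns, oldest first;
    history_len limits how many of the most recent turns are kept
    (None keeps the full history, mirroring the paper's history ablations).
    """
    turns = dialog if history_len is None else dialog[-history_len:]
    tokens = ["<TAR>"] + target_object.lower().split()
    for speaker, utterance in turns:
        sep = "<NAV>" if speaker == "navigator" else "<ORA>"
        tokens += [sep] + utterance.lower().split()
    return tokens

dialog = [
    ("navigator", "should i go up the stairs"),
    ("oracle", "yes then turn left at the landing"),
]
# Full history: target tokens plus both turns.
full = build_ndh_input("plant", dialog, history_len=None)
# Truncated history: only the oracle's most recent answer is kept.
last = build_ndh_input("plant", dialog, history_len=1)
```

The resulting token sequence would be embedded and fed to the sequence-to-sequence model's encoder; extending `history_len` is what lets the agent condition on earlier question-answer exchanges.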

Code Repositories

HomeroRR/rmm (PyTorch; mentioned in GitHub)
mmurray/cvdn (official; PyTorch; mentioned in GitHub)

Benchmarks

Benchmark: visual-navigation-on-cooperative-vision-and-1

Methodology         dist_to_end_reduction   spl
Pansy               1.76                    0.15
Seq2Seq Baseline    2.35                    0.16

