

R3M: A Universal Visual Representation for Robot Manipulation

Suraj Nair, Aravind Rajeswaran, Vikash Kumar, Chelsea Finn, Abhinav Gupta

Abstract

We study how visual representations pre-trained on diverse human video data can enable data-efficient learning of downstream robotic manipulation tasks. Concretely, we pre-train a visual representation on the Ego4D human video dataset using a combination of time-contrastive learning, video-language alignment, and an L1 penalty that encourages sparse and compact representations. The resulting representation, R3M, can be used as a frozen perception module for downstream policy learning. Across a suite of 12 simulated robot manipulation tasks, we find that R3M improves task success by over 20% compared to training from scratch and by over 10% compared to state-of-the-art visual representations like CLIP and MoCo. Furthermore, R3M enables a Franka Emika Panda arm to learn a range of manipulation tasks in a real, cluttered apartment given just 20 demonstrations. Code and pre-trained models are available at https://tinyurl.com/robotr3m.
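The abstract names three training signals that are summed into one objective. Below is a minimal PyTorch sketch of that combination, not the paper's exact formulation: the InfoNCE loss forms, the single-negative setup, the alignment-score inputs, and the l1_weight value are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def time_contrastive_loss(z_anchor, z_pos, z_neg, temperature=0.1):
    """Frames close in time (anchor, pos) should embed closer together
    than temporally distant frames (anchor, neg); InfoNCE over the pair."""
    sim_pos = -torch.norm(z_anchor - z_pos, dim=-1) / temperature
    sim_neg = -torch.norm(z_anchor - z_neg, dim=-1) / temperature
    logits = torch.stack([sim_pos, sim_neg], dim=-1)  # (B, 2)
    labels = torch.zeros(len(z_anchor), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)  # the positive pair is class 0

def video_language_loss(score_aligned, score_misaligned):
    """A (video, caption) alignment score, produced by a separate scoring
    network, should rank true pairs above mismatched ones."""
    logits = torch.stack([score_aligned, score_misaligned], dim=-1)
    labels = torch.zeros(len(score_aligned), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)

def r3m_style_objective(z_anchor, z_pos, z_neg,
                        score_aligned, score_misaligned, l1_weight=1e-5):
    """Combined objective: time-contrastive + language alignment + L1 sparsity."""
    l1_penalty = z_anchor.abs().mean()  # pushes features to be sparse and compact
    return (time_contrastive_loss(z_anchor, z_pos, z_neg)
            + video_language_loss(score_aligned, score_misaligned)
            + l1_weight * l1_penalty)
```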

Code Repositories

facebookresearch/r3m (official implementation, PyTorch)
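As a quick illustration of using the release as a frozen perception module, here is a minimal sketch built around the repository's documented load_r3m entry point; the random stand-in frame and the downstream-policy comment are illustrative.

```python
import torch
from r3m import load_r3m  # loader provided by the facebookresearch/r3m package

device = "cuda" if torch.cuda.is_available() else "cpu"
r3m = load_r3m("resnet50")  # smaller variants: "resnet18", "resnet34"
r3m.eval()
r3m.to(device)

# Per the repo's README, R3M expects 224x224 images with values in [0, 255].
frame = torch.randint(0, 256, (1, 3, 224, 224)).float()  # stand-in camera frame

with torch.no_grad():
    embedding = r3m(frame.to(device))  # frozen feature, (1, 2048) for ResNet-50

# `embedding` would then feed a small downstream policy head trained by imitation.
```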

Benchmarks

Benchmark: robot-manipulation-generalization-on-the
Methodology: R3M
Metrics: Average decrease (average across all perturbations): -49.9
