Masked Visual Pre-training for Motor Control

Tete Xiao, Ilija Radosavovic, Trevor Darrell, Jitendra Malik

Abstract

This paper shows that self-supervised visual pre-training from real-world images is effective for learning motor control tasks from pixels. We first train the visual representations by masked modeling of natural images. We then freeze the visual encoder and train neural network controllers on top with reinforcement learning. We do not perform any task-specific fine-tuning of the encoder; the same visual representations are used for all motor control tasks. To the best of our knowledge, this is the first self-supervised model to exploit real-world images at scale for motor control. To accelerate progress in learning from pixels, we contribute a benchmark suite of hand-designed tasks varying in movements, scenes, and robots. Without relying on labels, state-estimation, or expert demonstrations, we consistently outperform supervised encoders by up to 80% absolute success rate, sometimes even matching the oracle state performance. We also find that in-the-wild images, e.g., from YouTube or Egocentric videos, lead to better visual representations for various manipulation tasks than ImageNet images.
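The recipe above has two stages: masked-modeling pre-training of a vision transformer on natural images, then reinforcement learning of per-task controllers on top of the frozen encoder. The PyTorch sketch below illustrates the second stage; the encoder interface, embedding width, and MLP controller shape are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn


class FrozenEncoderPolicy(nn.Module):
    """A lightweight controller head on top of a frozen pre-trained visual encoder."""

    def __init__(self, encoder: nn.Module, embed_dim: int, action_dim: int):
        super().__init__()
        self.encoder = encoder
        # Freeze the visual encoder: RL updates never touch its weights,
        # so the same representation can be reused across all tasks.
        for p in self.encoder.parameters():
            p.requires_grad = False
        self.encoder.eval()
        # Only this small MLP head is trained, one per motor control task.
        self.policy = nn.Sequential(
            nn.Linear(embed_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 256),
            nn.ReLU(),
            nn.Linear(256, action_dim),
        )

    def forward(self, pixels: torch.Tensor) -> torch.Tensor:
        # No gradients flow into the encoder; it acts as a fixed feature extractor.
        with torch.no_grad():
            features = self.encoder(pixels)
        return self.policy(features)
```

Because the encoder is shared and frozen, switching tasks only means training a new head, which is what lets a single set of visual representations serve every task in the suite.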

Code Repositories

ir413/mvp (PyTorch)
Mentioned on GitHub
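For loading the released encoders, the repository's README shows a two-line interface; the sketch below follows it, but the `mvp.load` / `freeze` calls and the checkpoint name `vits-mae-hoi` should be treated as assumptions that may change across repo versions.

```python
import mvp  # installed from the ir413/mvp repository

# Load a pre-trained ViT encoder by checkpoint name (per the repo README).
model = mvp.load("vits-mae-hoi")

# Freeze it so downstream controllers never update the encoder weights.
model.freeze()
```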

Benchmarks

Benchmark: robot-manipulation-generalization-on-the
Methodology: MVP
Metric: Average decrease (average across all perturbations): -16.3
