Towards Generalist Robot Policies: What Matters in Building Vision-Language-Action Models

Xinghang Li, Peiyan Li, Minghuan Liu, Dong Wang, Jirong Liu, Bingyi Kang, Xiao Ma, Tao Kong, Hanbo Zhang, Huaping Liu

Abstract

Foundation Vision-Language Models (VLMs) exhibit strong capabilities in multi-modal representation learning, comprehension, and reasoning. By injecting action components into these VLMs, Vision-Language-Action models (VLAs) can be naturally formed and show promising performance. Existing work has demonstrated the effectiveness and generalization of VLAs across multiple scenarios and tasks. Nevertheless, the transfer from VLMs to VLAs is not trivial, since existing VLAs differ in their backbones, action-prediction formulations, data distributions, and training recipes, which leaves a gap in the systematic understanding of VLA design choices. In this work, we identify the key factors that significantly influence VLA performance and focus on answering three essential design questions: which backbone to select, how to formulate the VLA architecture, and when to add cross-embodiment data. The results firmly convince us of why VLAs are needed and lead us to develop a new family of VLAs, RoboVLMs, which requires very little manual design and achieves new state-of-the-art performance in three simulation tasks and real-world experiments. Through extensive experiments covering over 8 VLM backbones, 4 policy architectures, and over 600 distinct designed experiments, we provide a detailed guidebook for the future design of VLAs. In addition to the study, we release the highly flexible RoboVLMs framework, which supports easy integration of new VLMs and free combination of various design choices, to facilitate future research. We open-source all details, including code, models, datasets, and toolkits, along with detailed training and evaluation recipes, at robovlms.github.io.
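
To make the abstract's core idea concrete, the sketch below shows one minimal way an action component can be attached to a vision-language backbone to form a VLA policy. This is a hypothetical, simplified illustration: the class names (ToyVLMBackbone, VLAPolicy), the tiny transformer fusion, and the 7-DoF action head are assumptions for intuition only and do not reflect the RoboVLMs implementation or the design choices studied in the paper.

```python
# Hypothetical sketch: VLM backbone + action head = a one-step VLA policy.
import torch
import torch.nn as nn


class ToyVLMBackbone(nn.Module):
    """Stand-in for a pretrained VLM: fuses image patches and text tokens."""

    def __init__(self, dim=256, vocab_size=1000):
        super().__init__()
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=16, stride=16)
        self.token_embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, image, text_ids):
        img_tokens = self.patch_embed(image).flatten(2).transpose(1, 2)  # (B, N, dim)
        txt_tokens = self.token_embed(text_ids)                          # (B, T, dim)
        fused = self.fusion(torch.cat([img_tokens, txt_tokens], dim=1))
        return fused.mean(dim=1)  # pooled multi-modal feature


class VLAPolicy(nn.Module):
    """'Injecting an action component': an action head on top of the VLM."""

    def __init__(self, backbone, action_dim=7):
        super().__init__()
        self.backbone = backbone
        # Predicts e.g. a 7-DoF end-effector action (xyz, rpy, gripper).
        self.action_head = nn.Sequential(
            nn.Linear(256, 256), nn.GELU(), nn.Linear(256, action_dim)
        )

    def forward(self, image, text_ids):
        return self.action_head(self.backbone(image, text_ids))


if __name__ == "__main__":
    policy = VLAPolicy(ToyVLMBackbone())
    image = torch.randn(2, 3, 224, 224)         # batch of RGB observations
    text_ids = torch.randint(0, 1000, (2, 12))  # tokenized instructions
    print(policy(image, text_ids).shape)        # torch.Size([2, 7])
```

In practice the backbone, action-prediction formulation (e.g. continuous regression vs. discretized action tokens), and training data mix are exactly the design choices the paper studies.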

Code Repositories

Robot-VLAs/RoboVLMs (PyTorch)

Benchmarks

Benchmark: robot-manipulation-on-simpler-env | Methodology: RoboVLM
  Variant Aggregation: 0.463
  Variant Aggregation - Move Near: 0.560
  Variant Aggregation - Open/Close Drawer: 0.085
  Variant Aggregation - Pick Coke Can: 0.683
  Visual Matching: 0.563
  Visual Matching - Move Near: 0.663
  Visual Matching - Open/Close Drawer: 0.268
  Visual Matching - Pick Coke Can: 0.727

Benchmark: robot-manipulation-on-simplerenv-widow-x | Methodology: RoboVLM
  Average: 0.135
  Put Carrot on Plate: 0.250
  Put Eggplant in Yellow Basket: 0.000
  Put Spoon on Towel: 0.208
  Stack Green Block on Yellow Block: 0.083
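
For readers comparing these numbers, the snippet below is a small sanity check under the assumption that the reported Average for the SimplerEnv (WidowX) benchmark is the unweighted mean of the four per-task success rates; the metric names mirror the table above.

```python
# Assumption: "Average" is the unweighted mean of the four WidowX task scores.
scores = {
    "Put Carrot on Plate": 0.250,
    "Put Eggplant in Yellow Basket": 0.000,
    "Put Spoon on Towel": 0.208,
    "Stack Green Block on Yellow Block": 0.083,
}
average = sum(scores.values()) / len(scores)
print(f"{average:.3f}")  # 0.135, matching the reported Average
```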
