SpatialVLA: Exploring Spatial Representations for Visual-Language-Action Model

Delin Qu; Haoming Song; Qizhi Chen; Yuanqi Yao; Xinyi Ye; Yan Ding; Zhigang Wang; JiaYuan Gu; Bin Zhao; Dong Wang; Xuelong Li

Abstract

In this paper, we argue that spatial understanding is the key to robot manipulation, and propose SpatialVLA to explore effective spatial representations for robot foundation models. Specifically, we introduce Ego3D Position Encoding to inject 3D information into the input observations of the visual-language-action model, and propose Adaptive Action Grids, which represent spatial robot movements with adaptively discretized action grids, facilitating the learning of generalizable and transferable spatial action knowledge for cross-robot control. SpatialVLA is first pre-trained on top of a vision-language model with 1.1 million real-world robot episodes, learning a generalist manipulation policy across multiple robot environments and tasks. After pre-training, SpatialVLA is applied directly to numerous tasks in a zero-shot manner. Its superior results in simulation and on real-world robots demonstrate its ability to infer complex robot motion trajectories and its strong in-domain multi-task generalization. We further show that the proposed Adaptive Action Grids offer a new and effective way to fine-tune the pre-trained SpatialVLA model for new simulation and real-world setups, where the pre-learned action grids are re-discretized to capture the robot-specific spatial action movements of the new setup. The superior results from extensive evaluations demonstrate exceptional in-distribution generalization and out-of-distribution adaptation capability, highlighting the crucial benefit of the proposed spatial-aware representations for generalist robot policy learning. All details and code will be open-sourced.
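
To make the Adaptive Action Grids idea more concrete, the sketch below shows one plausible way such grids could work: per-dimension action bins are placed at empirical quantiles of the training action distribution (so frequently used motion ranges get finer resolution), and the bins are re-fit ("re-discretized") on a new robot's action statistics during adaptation. The quantile-based construction, function names, and parameters are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def build_adaptive_action_grid(actions, num_bins=256):
    """Build per-dimension adaptive bins from a dataset of continuous actions.

    actions: (N, D) array of robot action deltas (e.g., end-effector translation).
    Returns a (D, num_bins - 1) array of bin edges placed at empirical quantiles,
    so that densely used action ranges are discretized more finely.
    """
    quantiles = np.linspace(0.0, 1.0, num_bins + 1)[1:-1]  # interior edges only
    return np.stack([np.quantile(actions[:, d], quantiles)
                     for d in range(actions.shape[1])])

def discretize(action, edges):
    """Map a continuous action vector (D,) to per-dimension grid (token) indices."""
    return np.array([np.searchsorted(edges[d], action[d])
                     for d in range(len(action))])

def re_discretize_for_new_setup(new_actions, num_bins=256):
    """Re-fit the grid on a new robot's action statistics, as one might do when
    adapting a pre-trained policy to a new embodiment or workspace."""
    return build_adaptive_action_grid(new_actions, num_bins)

# Example usage with placeholder data (7-DoF action deltas):
pretrain_actions = np.random.randn(10_000, 7) * 0.05
edges = build_adaptive_action_grid(pretrain_actions)
token_ids = discretize(pretrain_actions[0], edges)
```

Under this reading, discretized action indices can be treated as tokens by the vision-language-action backbone, and re-fitting the bin edges on new data changes only the action vocabulary's geometry, not the model architecture.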

Benchmarks

Benchmark: robot-manipulation-on-simpler-env
Methodology: SpatialVLA
Metrics:
Variant Aggregation: 0.688
Variant Aggregation - Move Near: 0.717
Variant Aggregation - Open/Close Drawer: 0.362
Variant Aggregation - Pick Coke Can: 0.895
Visual Matching: 0.719
Visual Matching - Move Near: 0.696
Visual Matching - Open/Close Drawer: 0.593
Visual Matching - Pick Coke Can: 0.810

Benchmark: robot-manipulation-on-simplerenv-widow-x
Methodology: SpatialVLA
Metrics:
Average: 0.344
Put Carrot on Plate: 0.208
Put Spoon on Towel: 0.208
Stack Green Block on Yellow Block: 0.250
