RoboUniView: Visual-Language Model with Unified View Representation for Robotic Manipulation
Fanfan Liu, Feng Yan, Liming Zheng, Chengjian Feng, Yiyang Huang, Lin Ma

Abstract
Utilizing Vision-Language Models (VLMs) for robotic manipulation represents a novel paradigm, aiming to enhance the model's ability to generalize to new objects and instructions. However, due to variations in camera specifications and mounting positions, existing methods exhibit significant performance disparities across different robotic platforms. To address this challenge, we propose RoboUniView in this paper, an innovative approach that decouples visual feature extraction from action learning. We first learn a unified view representation from multi-perspective views by pre-training on readily accessible data, and then derive actions from this unified view representation to control robotic manipulation. This unified view representation more accurately mirrors the physical world and is not constrained by the robotic platform's camera parameters. Thanks to this methodology, we achieve state-of-the-art performance on the demanding CALVIN benchmark, enhancing the success rate in the $D \to D$ setting from 93.0% to 96.2%, and in the $ABC \to D$ setting from 92.2% to 94.2%. Moreover, our model exhibits outstanding adaptability and flexibility: it maintains high performance under unseen camera parameters, can utilize multiple datasets with varying camera parameters, and is capable of joint cross-task learning across datasets. Code is available at https://github.com/liufanfanlff/RoboUniview.
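The decoupled design described in the abstract can be illustrated with a short, self-contained PyTorch sketch: a view encoder fuses features from multiple camera views, conditioned on each view's intrinsics and extrinsics, into a camera-agnostic "unified view" token set, and a separate action head reads a robot action off that representation. All class names, dimensions, and architectural choices below are illustrative assumptions, not the authors' implementation; see the linked repository for the actual code.

```python
# Minimal sketch of a "unified view representation -> action" pipeline.
# Every module, dimension, and name here is a hypothetical stand-in for
# exposition; it is NOT the RoboUniView architecture itself.
import torch
import torch.nn as nn


class UnifiedViewEncoder(nn.Module):
    """Fuses per-camera features into a camera-agnostic token set."""

    def __init__(self, feat_dim: int = 256, num_tokens: int = 64):
        super().__init__()
        # Per-view image backbone (placeholder: a tiny CNN).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, stride=4, padding=3), nn.ReLU(),
            nn.Conv2d(64, feat_dim, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # Flattened 4x4 extrinsics + 3x3 intrinsics -> embedding, so the
        # fusion step is aware of each view's geometry.
        self.cam_embed = nn.Linear(16 + 9, feat_dim)
        # Learnable queries that define the unified representation.
        self.queries = nn.Parameter(torch.randn(num_tokens, feat_dim))
        self.fusion = nn.MultiheadAttention(feat_dim, num_heads=8, batch_first=True)

    def forward(self, images, extrinsics, intrinsics):
        # images: (B, V, 3, H, W); extrinsics: (B, V, 4, 4); intrinsics: (B, V, 3, 3)
        b, v = images.shape[:2]
        feats = self.backbone(images.flatten(0, 1))        # (B*V, C, h, w)
        feats = feats.flatten(2).transpose(1, 2)           # (B*V, h*w, C)
        cams = torch.cat([extrinsics.flatten(2), intrinsics.flatten(2)], dim=-1)
        feats = feats + self.cam_embed(cams).flatten(0, 1).unsqueeze(1)
        feats = feats.reshape(b, -1, feats.shape[-1])      # concat tokens of all views
        queries = self.queries.unsqueeze(0).expand(b, -1, -1)
        unified, _ = self.fusion(queries, feats, feats)    # (B, num_tokens, C)
        return unified


class ActionHead(nn.Module):
    """Maps the unified view representation to a 7-DoF action (pose + gripper)."""

    def __init__(self, feat_dim: int = 256, action_dim: int = 7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(), nn.Linear(feat_dim, action_dim)
        )

    def forward(self, unified):
        return self.mlp(unified.mean(dim=1))               # (B, action_dim)


if __name__ == "__main__":
    encoder, policy = UnifiedViewEncoder(), ActionHead()
    imgs = torch.randn(2, 3, 3, 128, 128)   # batch of 2, 3 camera views each
    extr = torch.eye(4).repeat(2, 3, 1, 1)  # dummy camera extrinsics
    intr = torch.eye(3).repeat(2, 3, 1, 1)  # dummy camera intrinsics
    action = policy(encoder(imgs, extr, intr))
    print(action.shape)                     # torch.Size([2, 7])
```

Because the encoder takes camera parameters as explicit inputs rather than baking them into the features, the same action head can, in principle, be reused across platforms with different camera setups, which is the point of decoupling the two stages.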
Code Repositories
https://github.com/liufanfanlff/RoboUniview
Benchmarks
| Benchmark | Method | Metric |
|---|---|---|
| Robot Manipulation on CALVIN | RoboUniView (Ours) | Avg. sequence length (D → D): 3.855 |
| Zero-Shot Generalization on CALVIN | RoboUniView | Avg. sequence length: 3.647 |