SpatialBot: Precise Spatial Understanding with Vision Language Models

Wenxiao Cai Iaroslav Ponomarenko Jianhao Yuan Xiaoqi Li Wankou Yang Hao Dong Bo Zhao

Abstract

Vision Language Models (VLMs) have achieved impressive performance in 2D image understanding; however, they still struggle with spatial understanding, which is the foundation of Embodied AI. In this paper, we propose SpatialBot, which achieves better spatial understanding by taking both RGB and depth images as input. Additionally, we construct the SpatialQA dataset, which contains multi-level depth-related questions to train VLMs for depth understanding. Finally, we present SpatialBench to comprehensively evaluate VLMs' spatial-understanding capabilities at different levels. Extensive experiments on our spatial-understanding benchmark, on general VLM benchmarks, and on Embodied AI tasks demonstrate the remarkable improvements of SpatialBot trained on SpatialQA. The model, code, and data are available at https://github.com/BAAI-DCAI/SpatialBot.
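The abstract states that SpatialBot consumes an RGB image together with a depth image. A practical question this raises is how a metric depth map can be packed into the 3-channel uint8 format that RGB vision encoders expect. The sketch below shows one common way to do this: quantizing normalized depth into 24 bits and splitting it across three channels. This encoding scheme and the `max_depth_m` parameter are illustrative assumptions, not SpatialBot's documented preprocessing.

```python
import numpy as np

def encode_depth_for_vlm(depth_m: np.ndarray, max_depth_m: float = 10.0) -> np.ndarray:
    """Quantize a metric depth map (in meters) into a 3-channel uint8 image
    so it can be fed to an RGB-style vision encoder alongside the RGB frame.

    NOTE: this byte-splitting scheme is a hypothetical illustration; the
    actual SpatialBot preprocessing may differ.
    """
    # Normalize depth to [0, 1], clipping values beyond the assumed range.
    d = np.clip(depth_m / max_depth_m, 0.0, 1.0)
    # Quantize to 24 bits, then split into three bytes: coarse, medium, fine.
    v = (d * (2**24 - 1)).astype(np.uint32)
    c0 = (v >> 16).astype(np.uint8)          # coarse depth (high byte)
    c1 = ((v >> 8) & 0xFF).astype(np.uint8)  # medium precision (middle byte)
    c2 = (v & 0xFF).astype(np.uint8)         # fine precision (low byte)
    return np.stack([c0, c1, c2], axis=-1)

# Placeholder inputs standing in for a real RGB-D camera frame.
rgb = np.zeros((480, 640, 3), dtype=np.uint8)
depth = np.random.uniform(0.0, 10.0, size=(480, 640))
depth_img = encode_depth_for_vlm(depth)

# A model would then receive both images with the text prompt, e.g.:
model_inputs = {"images": [rgb, depth_img], "prompt": "How far away is the cup?"}
```

Splitting depth across bytes preserves fine-grained distance information that a single 8-bit channel (256 levels over 10 m, i.e. ~4 cm steps) would lose.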

Code Repositories

baai-dcai/spatialbot (official, PyTorch)

Benchmarks

Benchmark: spatial-reasoning-on-6-dof-spatialbench
Method: SpatialBot
Metrics:
Orientation-abs: 22.9
Orientation-rel: 39.6
Position-abs: 21.6
Position-rel: 50.9
Total: 32.7
