ShapeLLM: Universal 3D Object Understanding for Embodied Interaction

Zekun Qi, Runpei Dong, Shaochen Zhang, Haoran Geng, Chunrui Han, Zheng Ge, Li Yi, Kaisheng Ma

Abstract

This paper presents ShapeLLM, the first 3D Multimodal Large Language Model (LLM) designed for embodied interaction, exploring universal 3D object understanding with 3D point clouds and languages. ShapeLLM is built upon an improved 3D encoder by extending ReCon to ReCon++, which benefits from multi-view image distillation for enhanced geometry understanding. By utilizing ReCon++ as the 3D point cloud input encoder for LLMs, ShapeLLM is trained on constructed instruction-following data and tested on our newly human-curated benchmark, 3D MM-Vet. ReCon++ and ShapeLLM achieve state-of-the-art performance in 3D geometry understanding and language-unified 3D interaction tasks, such as embodied visual grounding. Project page: https://qizekun.github.io/shapellm/
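
As a rough illustration of the pipeline the abstract describes, the sketch below shows how features from a 3D point-cloud encoder (a stand-in for ReCon++) could be projected into an LLM's embedding space and prepended to text-token embeddings, mirroring common multimodal LLM designs. All module names, dimensions, and the pooling/projection choices here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): feeding a 3D point-cloud
# encoder's output into an LLM. Names and dimensions are assumptions.
import torch
import torch.nn as nn

class PointCloudEncoder(nn.Module):
    """Stand-in for ReCon++: maps N points (x, y, z) to a fixed set of token features."""
    def __init__(self, dim=768, num_tokens=64):
        super().__init__()
        self.num_tokens = num_tokens
        self.mlp = nn.Sequential(nn.Linear(3, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, points):              # points: (B, N, 3)
        feats = self.mlp(points)            # (B, N, dim)
        b, n, d = feats.shape
        # Pool groups of points into a fixed number of tokens (illustrative only).
        feats = feats[:, : self.num_tokens * (n // self.num_tokens)]
        return feats.view(b, self.num_tokens, -1, d).mean(dim=2)  # (B, T, dim)

class ShapeLLMSketch(nn.Module):
    """Projects 3D tokens into the LLM embedding space and prepends them to text tokens."""
    def __init__(self, llm_dim=4096, enc_dim=768):
        super().__init__()
        self.encoder = PointCloudEncoder(dim=enc_dim)
        self.projector = nn.Linear(enc_dim, llm_dim)

    def forward(self, points, text_embeds):
        shape_tokens = self.projector(self.encoder(points))
        # The concatenated sequence would then be consumed by the LLM backbone.
        return torch.cat([shape_tokens, text_embeds], dim=1)

# Usage: a 1,024-point cloud plus 16 text-token embeddings.
model = ShapeLLMSketch()
out = model(torch.randn(2, 1024, 3), torch.randn(2, 16, 4096))
print(out.shape)  # torch.Size([2, 80, 4096])
```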

