ShapeLLM: Universal 3D Object Understanding for Embodied Interaction
Zekun Qi, Runpei Dong, Shaochen Zhang, Haoran Geng, Chunrui Han, Zheng Ge, Li Yi, Kaisheng Ma
Abstract
This paper presents ShapeLLM, the first 3D Multimodal Large Language Model (LLM) designed for embodied interaction, exploring universal 3D object understanding with 3D point clouds and language. ShapeLLM is built upon an improved 3D encoder by extending ReCon to ReCon++, which benefits from multi-view image distillation for enhanced geometry understanding. By utilizing ReCon++ as the 3D point cloud input encoder for LLMs, ShapeLLM is trained on constructed instruction-following data and tested on our newly human-curated benchmark, 3D MM-Vet. ReCon++ and ShapeLLM achieve state-of-the-art performance in 3D geometry understanding and language-unified 3D interaction tasks, such as embodied visual grounding. Project page: https://qizekun.github.io/shapellm/