Natural Language Visual Grounding

Evaluation Metric

Accuracy (%)
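For reference, below is a minimal sketch of how Accuracy is commonly computed for GUI grounding benchmarks: a prediction counts as correct when the model's predicted click point falls inside the ground-truth bounding box of the target element. The data structures and names here (`Example`, `pred_xy`, `gt_box`) are illustrative assumptions, not this benchmark's official evaluation code.

```python
from dataclasses import dataclass

@dataclass
class Example:
    # Model-predicted click point (x, y) in screen pixels (assumed format).
    pred_xy: tuple[float, float]
    # Ground-truth target element box as (x1, y1, x2, y2) (assumed format).
    gt_box: tuple[float, float, float, float]

def is_hit(pred_xy: tuple[float, float],
           gt_box: tuple[float, float, float, float]) -> bool:
    """True if the predicted point lies inside the ground-truth box."""
    x, y = pred_xy
    x1, y1, x2, y2 = gt_box
    return x1 <= x <= x2 and y1 <= y <= y2

def grounding_accuracy(examples: list[Example]) -> float:
    """Percentage of examples whose predicted point hits the target box."""
    hits = sum(is_hit(ex.pred_xy, ex.gt_box) for ex in examples)
    return 100.0 * hits / len(examples)

if __name__ == "__main__":
    demo = [
        Example(pred_xy=(120, 45), gt_box=(100, 30, 180, 60)),  # hit
        Example(pred_xy=(10, 10),  gt_box=(100, 30, 180, 60)),  # miss
    ]
    print(f"Accuracy: {grounding_accuracy(demo):.2f}%")  # -> Accuracy: 50.00%
```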

Evaluation Results

Performance of each model on this benchmark.

| Model | Accuracy (%) | Paper Title |
|---|---|---|
| UGround-V1-7B | 86.34 | Navigating the Digital World as Humans Do: Universal Visual Grounding for GUI Agents |
| Aguvis-7B | 83.0 | Aguvis: Unified Pure Vision Agents for Autonomous GUI Interaction |
| OS-Atlas-Base-7B | 82.47 | OS-ATLAS: A Foundation Action Model for Generalist GUI Agents |
| Aria-UI | 81.1 | Aria-UI: Visual Grounding for GUI Instructions |
| Aguvis-G-7B | 81.0 | Aguvis: Unified Pure Vision Agents for Autonomous GUI Interaction |
| UGround-V1-2B | 77.67 | Navigating the Digital World as Humans Do: Universal Visual Grounding for GUI Agents |
| ShowUI | 75.1 | ShowUI: One Vision-Language-Action Model for GUI Visual Agent |
| ShowUI-G | 75.0 | ShowUI: One Vision-Language-Action Model for GUI Visual Agent |
| UGround | 73.3 | Navigating the Digital World as Humans Do: Universal Visual Grounding for GUI Agents |
| OmniParser | 73.0 | OmniParser for Pure Vision Based GUI Agent |
| OS-Atlas-Base-4B | 68.0 | OS-ATLAS: A Foundation Action Model for Generalist GUI Agents |
| SeeClick | 53.4 | SeeClick: Harnessing GUI Grounding for Advanced Visual GUI Agents |
| CogAgent | 47.4 | CogAgent: A Visual Language Model for GUI Agents |
| Qwen2-VL-7B | 42.1 | Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution |
| Qwen-GUI | 28.6 | GUICourse: From General Vision Language Models to Versatile GUI Agents |
| MiniGPT-v2 | 5.7 | MiniGPT-v2: large language model as a unified interface for vision-language multi-task learning |
| Groma | 5.2 | Groma: Localized Visual Tokenization for Grounding Multimodal Large Language Models |
| Qwen-VL | 5.2 | Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond |