Dongxu Li Yudong Liu Haoning Wu Yue Wang Zhiqi Shen Bowen Qu Xinyao Niu Guoyin Wang Bei Chen Junnan Li

Abstract
Information comes in diverse modalities. Multimodal native AI models are essential to integrate real-world information and deliver comprehensive understanding. While proprietary multimodal native models exist, their lack of openness imposes obstacles to adoption, let alone adaptation. To fill this gap, we introduce Aria, an open multimodal native model with best-in-class performance across a wide range of multimodal, language, and coding tasks. Aria is a mixture-of-experts model with 3.9B and 3.5B activated parameters per visual token and text token, respectively. It outperforms Pixtral-12B and Llama3.2-11B, and is competitive against the best proprietary models on various multimodal tasks. We pre-train Aria from scratch following a 4-stage pipeline, which progressively equips the model with strong capabilities in language understanding, multimodal understanding, long context window, and instruction following. We open-source the model weights along with a codebase that facilitates easy adoption and adaptation of Aria in real-world applications.
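The "activated parameters per token" figure above comes from the mixture-of-experts design: a router selects only a few experts for each token, so just a fraction of the total parameters run per token. The following is a minimal sketch of top-k expert routing using hypothetical sizes (not Aria's actual configuration); `moe_forward`, `experts`, and `router` are illustrative names, not part of the released codebase.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration only (not Aria's real config).
d_model, n_experts, top_k = 8, 4, 2

# Each expert is a simple linear map; the router scores experts per token.
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts))

def moe_forward(x):
    """Route one token through its top-k experts, weighted by softmax scores."""
    logits = x @ router                      # one score per expert
    top = np.argsort(logits)[-top_k:]        # indices of the k best experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                 # softmax over selected experts only
    # Only the top-k experts run, so the activated parameters per token are
    # roughly a top_k / n_experts fraction of the total expert parameters.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(d_model)
out = moe_forward(token)
print(out.shape)  # (8,)
```

In this sketch only 2 of 4 experts execute per token, which is how an MoE model's activated parameter count can be far smaller than its total parameter count.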
Benchmarks
| Benchmark | Model | Metric |
|---|---|---|
| Video Question Answering on TVBench | Aria | Average Accuracy: 51.0 |