Aria: An Open Multimodal Native Mixture-of-Experts Model

Dongxu Li Yudong Liu Haoning Wu Yue Wang Zhiqi Shen Bowen Qu Xinyao Niu Guoyin Wang Bei Chen Junnan Li


Abstract

Information comes in diverse modalities. Multimodal native AI models are essential to integrate real-world information and deliver comprehensive understanding. While proprietary multimodal native models exist, their lack of openness imposes obstacles for adoption, let alone adaptation. To fill this gap, we introduce Aria, an open multimodal native model with best-in-class performance across a wide range of multimodal, language, and coding tasks. Aria is a mixture-of-experts model with 3.9B and 3.5B activated parameters per visual token and text token, respectively. It outperforms Pixtral-12B and Llama3.2-11B, and is competitive against the best proprietary models on various multimodal tasks. We pre-train Aria from scratch following a 4-stage pipeline, which progressively equips the model with strong capabilities in language understanding, multimodal understanding, long context window, and instruction following. We open-source the model weights along with a codebase that facilitates easy adoption and adaptation of Aria in real-world applications.
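To make the notion of "activated parameters per token" concrete, the sketch below shows a generic top-k mixture-of-experts feed-forward layer in PyTorch. It is a minimal illustration under assumed placeholder sizes, not Aria's published architecture or code: the expert count, hidden dimensions, and top-k value are made up for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TopKMoE(nn.Module):
    """Generic top-k mixture-of-experts FFN (illustrative, not Aria's implementation)."""

    def __init__(self, d_model: int, d_ff: int, num_experts: int, top_k: int):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts, bias=False)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, d_model); each token is routed to its top-k experts.
        gate_logits = self.router(x)                              # (num_tokens, num_experts)
        gate_weights, expert_ids = gate_logits.topk(self.top_k, dim=-1)
        gate_weights = F.softmax(gate_weights, dim=-1)            # renormalize over chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = expert_ids[:, slot] == e                   # tokens sent to expert e in this slot
                if mask.any():
                    out[mask] += gate_weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out


# Only top_k of num_experts expert FFNs run for any given token, so the
# "activated" parameter count per token is much smaller than the total.
layer = TopKMoE(d_model=1024, d_ff=4096, num_experts=8, top_k=2)
tokens = torch.randn(16, 1024)
print(layer(tokens).shape)  # torch.Size([16, 1024])
```

This is why the abstract can quote per-token activated parameter counts (3.9B for visual tokens, 3.5B for text tokens) that are smaller than the model's total size: each token only passes through the experts it is routed to.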

Code Repositories

rhymes-ai/aria (Official, PyTorch)
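The repository above is the authoritative source for installation and inference instructions. As a minimal sketch, assuming the released weights are also published on the Hugging Face Hub under a rhymes-ai/Aria identifier (an assumption, not stated on this page), loading them with the transformers library would look roughly like this:

```python
# Minimal loading sketch with Hugging Face transformers. The Hub id
# "rhymes-ai/Aria" is an assumption based on the GitHub organisation above;
# consult the rhymes-ai/aria README for the authoritative instructions.
import torch
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "rhymes-ai/Aria"  # assumed Hub identifier; verify before use
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # bfloat16 keeps memory use manageable
    device_map="auto",            # spread layers across available GPUs
    trust_remote_code=True,       # the checkpoint ships custom modelling code
)
```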

Benchmarks

Benchmark: Video Question Answering on TVBench
Methodology: Aria
Metrics: Average Accuracy: 51.0
