HyperAI

Qwen2.5 Technical Report

Abstract

In this report, we introduce Qwen2.5, a comprehensive series of large language models (LLMs) designed to meet diverse needs. Compared to previous iterations, Qwen2.5 has been significantly improved during both the pre-training and post-training stages. In terms of pre-training, we have scaled the high-quality pre-training datasets from the previous 7 trillion tokens to 18 trillion tokens. This provides a strong foundation for common sense, expert knowledge, and reasoning capabilities. In terms of post-training, we implement intricate supervised finetuning with over 1 million samples, as well as multistage reinforcement learning. Post-training techniques enhance human preference alignment and notably improve long text generation, structured data analysis, and instruction following. To handle diverse and varied use cases effectively, we present the Qwen2.5 LLM series in rich sizes. Open-weight offerings include base and instruction-tuned models, with quantized versions available. In addition, for hosted solutions, the proprietary models currently include two mixture-of-experts (MoE) variants: Qwen2.5-Turbo and Qwen2.5-Plus, both available from Alibaba Cloud Model Studio. Qwen2.5 has demonstrated top-tier performance on a wide range of benchmarks evaluating language understanding, reasoning, mathematics, coding, human preference alignment, etc. Specifically, the open-weight flagship Qwen2.5-72B-Instruct outperforms a number of open and proprietary models and demonstrates competitive performance against the state-of-the-art open-weight model, Llama-3-405B-Instruct, which is around 5 times larger. Qwen2.5-Turbo and Qwen2.5-Plus offer superior cost-effectiveness while performing competitively against GPT-4o-mini and GPT-4o, respectively. Additionally, as the foundation, Qwen2.5 models have been instrumental in training specialized models such as Qwen2.5-Math, Qwen2.5-Coder, QwQ, and multimodal models.

Code Repositories

baichuan-inc/Baichuan-Omni-1.5
pytorch
Mentioned in GitHub
baichuan-inc/baichuan-audio
pytorch
Mentioned in GitHub
funaudiollm/inspiremusic
pytorch
Mentioned in GitHub
qwenlm/qwen2
pytorch
Mentioned in GitHub
qwenlm/qwen1.5
pytorch
Mentioned in GitHub
qwenlm/qwen2.5
Official
pytorch
Mentioned in GitHub

Benchmarks

Benchmark | Methodology | Metrics
mathematical-reasoning-on-aime24 | Qwen2.5-72B-Instruct | Acc: 23.3
