HyperAI


Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone


Abstract

We introduce phi-3-mini, a 3.8 billion parameter language model trained on 3.3 trillion tokens, whose overall performance, as measured by both academic benchmarks and internal testing, rivals that of models such as Mixtral 8x7B and GPT-3.5 (e.g., phi-3-mini achieves 69% on MMLU and 8.38 on MT-bench), despite being small enough to be deployed on a phone. The innovation lies entirely in our dataset for training, a scaled-up version of the one used for phi-2, composed of heavily filtered web data and synthetic data. The model is also further aligned for robustness, safety, and chat format. We also provide some initial parameter-scaling results with 7B and 14B models trained for 4.8T tokens, called phi-3-small and phi-3-medium, both significantly more capable than phi-3-mini (e.g., 75% and 78% on MMLU, and 8.7 and 8.9 on MT-bench, respectively).
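For reference, the headline scaling numbers quoted in the abstract can be collected in a short sketch. All figures are taken directly from the text above; the dictionary layout and field names are illustrative, not part of the report:

```python
# Model sizes, training-token counts, and benchmark scores as
# reported in the Phi-3 technical report abstract.
models = {
    "phi-3-mini":   {"params_b": 3.8,  "tokens_t": 3.3, "mmlu_pct": 69, "mt_bench": 8.38},
    "phi-3-small":  {"params_b": 7.0,  "tokens_t": 4.8, "mmlu_pct": 75, "mt_bench": 8.7},
    "phi-3-medium": {"params_b": 14.0, "tokens_t": 4.8, "mmlu_pct": 78, "mt_bench": 8.9},
}

# Print one summary line per model.
for name, m in models.items():
    print(f"{name:13s} {m['params_b']:>5.1f}B params, "
          f"{m['tokens_t']:.1f}T tokens, "
          f"MMLU {m['mmlu_pct']}%, MT-bench {m['mt_bench']}")
```

This makes the trend explicit: each step up in parameter count (3.8B to 7B to 14B) brings a corresponding gain on both MMLU and MT-bench.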

Benchmarks

Benchmark                     Methodology    Metrics
mmr-total-on-mrr-benchmark    Phi-3-Vision   Total Column Score: 397
