A Survey on Large Language Model Benchmarks

Abstract

In recent years, with the rapid growth in the depth and breadth of large language models' capabilities, a growing number of corresponding evaluation benchmarks have emerged. As quantitative tools for assessing model performance, benchmarks are not only a core means of measuring model capabilities but also a key element in guiding the direction of model development and promoting technological innovation. We systematically review the current status and development of large language model benchmarks for the first time, categorizing 283 representative benchmarks into three categories: general capabilities, domain-specific, and target-specific. General-capability benchmarks cover aspects such as core linguistics, knowledge, and reasoning; domain-specific benchmarks focus on fields such as natural sciences, humanities and social sciences, and engineering technology; target-specific benchmarks address risks, reliability, agents, and related concerns. We point out that current benchmarks suffer from problems such as inflated scores caused by data contamination, unfair evaluation due to cultural and linguistic biases, and a lack of evaluation of process credibility and dynamic environments, and we provide a referable design paradigm for future benchmark innovation.
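The data-contamination problem mentioned above is often screened for with simple n-gram overlap heuristics: a benchmark item whose word n-grams largely reappear in the training corpus is flagged as potentially leaked. The sketch below is a minimal illustration of that idea, not a method described in the survey; the function names, the n-gram length of 8, and the 0.5 flagging threshold are all assumptions chosen for the example.

```python
# Minimal sketch of an n-gram overlap contamination check (illustrative only).
# A benchmark item is flagged if a large fraction of its word n-grams
# also appear in the training corpus.

def ngrams(text: str, n: int = 8) -> set:
    """Return the set of word-level n-grams in `text` (lowercased)."""
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def contamination_score(item: str, corpus_ngrams: set, n: int = 8) -> float:
    """Fraction of the item's n-grams that also occur in the corpus."""
    item_ngrams = ngrams(item, n)
    if not item_ngrams:
        return 0.0
    return len(item_ngrams & corpus_ngrams) / len(item_ngrams)

# Usage: build the corpus n-gram set once, then screen each benchmark item.
corpus_ngrams = ngrams("example training document text ...", n=8)
for item in ["example benchmark question text ..."]:
    if contamination_score(item, corpus_ngrams, n=8) > 0.5:  # threshold is arbitrary
        print("possible contamination:", item[:60])
```

Real contamination audits compare against full pretraining corpora and typically use more robust matching (e.g., normalized or fuzzy n-gram overlap); this sketch only conveys the basic mechanism behind the inflated-score concern.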
