Evaluating Large Language Models Trained on Code

Mark Chen; Jerry Tworek; Heewoo Jun; Qiming Yuan; Henrique Ponde de Oliveira Pinto; Jared Kaplan; Harri Edwards; Yuri Burda; Nicholas Joseph; Greg Brockman; Alex Ray; Raul Puri; Gretchen Krueger; Michael Petrov; Heidy Khlaaf; Girish Sastry; Pamela Mishkin; Brooke Chan; Scott Gray; Nick Ryder; Mikhail Pavlov; Alethea Power; Lukasz Kaiser; Mohammad Bavarian; Clemens Winter; Philippe Tillet; Felipe Petroski Such; Dave Cummings; Matthias Plappert; Fotios Chantzis; Elizabeth Barnes; Ariel Herbert-Voss; William Hebgen Guss; Alex Nichol; Alex Paino; Nikolas Tezak; Jie Tang; Igor Babuschkin; Suchir Balaji; Shantanu Jain; William Saunders; Christopher Hesse; Andrew N. Carr; Jan Leike; Josh Achiam; Vedant Misra; Evan Morikawa; Alec Radford; Matthew Knight; Miles Brundage; Mira Murati; Katie Mayer; Peter Welinder; Bob McGrew; Dario Amodei; Sam McCandlish; Ilya Sutskever; Wojciech Zaremba

Abstract

We introduce Codex, a GPT language model fine-tuned on publicly available code from GitHub, and study its Python code-writing capabilities. A distinct production version of Codex powers GitHub Copilot. On HumanEval, a new evaluation set we release to measure functional correctness for synthesizing programs from docstrings, our model solves 28.8% of the problems, while GPT-3 solves 0% and GPT-J solves 11.4%. Furthermore, we find that repeated sampling from the model is a surprisingly effective strategy for producing working solutions to difficult prompts. Using this method, we solve 70.2% of our problems with 100 samples per problem. Careful investigation of our model reveals its limitations, including difficulty with docstrings describing long chains of operations and with binding operations to variables. Finally, we discuss the potential broader impacts of deploying powerful code generation technologies, covering safety, security, and economics.
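
The solve rates quoted above are pass@k-style metrics: a problem counts as solved if any of k sampled completions passes the problem's unit tests. The paper estimates pass@k from n ≥ k samples per problem, of which c pass, using an unbiased estimator. The sketch below follows that formulation; it is a minimal illustration, not the paper's exact evaluation harness.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimator of pass@k.

    n: total completions sampled for one problem
    c: completions that passed the unit tests
    k: the k in pass@k
    """
    if n - c < k:
        # Every size-k subset must contain at least one correct sample.
        return 1.0
    # 1 - C(n-c, k) / C(n, k), computed as a running product for stability.
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Example: 100 samples per problem, 14 passing.
print(pass_at_k(n=100, c=14, k=1))   # ≈ 0.14
print(pass_at_k(n=100, c=14, k=10))  # ≈ 0.80
```

Averaging this quantity over all problems gives the reported pass@k; it explains why repeated sampling (large n, large k) lifts the solve rate from 28.8% at one sample to 70.2% at 100 samples.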

Code Repositories

openai/human-eval (Official; mentioned in GitHub)
codefuse-ai/codefuse-evaluation (PyTorch; mentioned in GitHub)
2796gaurav/human-eval (mentioned in GitHub)
ncoop57/gpt-code-clippy (JAX; mentioned in GitHub)
superli3/codenavi (TensorFlow; mentioned in GitHub)
codedotal/gpt-code-clippy (JAX; mentioned in GitHub)
fsoft-ai4code/codecapybara (PyTorch; mentioned in GitHub)
glouppe/info8010-deep-learning (PyTorch; mentioned in GitHub)
THUDM/CodeGeeX (MindSpore; mentioned in GitHub)
vhellendoorn/code-lms (mentioned in GitHub)
superli3/CYRMPR (TensorFlow; mentioned in GitHub)
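
Of the repositories above, openai/human-eval is the official release of the HumanEval benchmark and its execution harness. A rough usage sketch, based on that repository's documented workflow, is shown below; generate_one_completion is a placeholder for whatever model call you use.

```python
from human_eval.data import read_problems, write_jsonl

# Placeholder: replace with a call to your own code-generation model.
def generate_one_completion(prompt: str) -> str:
    raise NotImplementedError

problems = read_problems()  # the 164 HumanEval programming problems

num_samples_per_task = 100
samples = [
    dict(task_id=task_id,
         completion=generate_one_completion(problems[task_id]["prompt"]))
    for task_id in problems
    for _ in range(num_samples_per_task)
]
write_jsonl("samples.jsonl", samples)

# Then score functional correctness (this executes untrusted model output,
# so run it in a sandbox):
#   $ evaluate_functional_correctness samples.jsonl
```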

Benchmarks

Benchmark: code-generation-on-apps
Methodology: Codex 12B (Raw)
Metrics:
  Competition Pass@1: 0.50%
  Competition Pass@5: 1.00%
  Competition Pass@1000: 13.51%
  Competition Pass@any: 13.51%
  Interview Pass@1: 1.00%
  Interview Pass@5: 1.73%
  Interview Pass@1000: 13.15%
  Interview Pass@any: 13.15%
  Introductory Pass@1: 5.60%
  Introductory Pass@5: 9.20%
  Introductory Pass@1000: 35.20%
  Introductory Pass@any: 35.20%

Benchmark: multi-task-language-understanding-on-bbh-alg
Methodology: code-davinci-002 175B (CoT)
Metrics:
  Average (%): 73.9

Benchmark: multi-task-language-understanding-on-bbh-nlp
Methodology: code-davinci-002 175B (CoT)
Metrics:
  Average (%): 73.5
