
RES-Q: Evaluating Code-Editing Large Language Model Systems at the Repository Scale

Beck LaBash, August Rosedale, Alex Reents, Lucas Negritto, Colin Wiel

Abstract

The instruction-following ability of Large Language Models (LLMs) has cultivated a class of LLM-based systems capable of approaching complex tasks such as making edits to large code repositories. Due to the high sensitivity and unpredictability of LLM behavior in response to changes in prompting, robust evaluation tools are needed to drive future iteration of these systems. We propose RES-Q, a natural language instruction-based benchmark for evaluating $\textbf{R}$epository $\textbf{E}$diting $\textbf{S}$ystems, which consists of 100 handcrafted repository editing tasks derived from real GitHub commits. Given an edit instruction and a code repository, RES-Q evaluates an LLM system's ability to interpret the instruction, navigate the repository to gather relevant information, and construct an appropriate edit that satisfies the specified criteria. We argue that evaluating LLMs in this way addresses issues with traditional benchmarks and provides a more holistic assessment of a model's abilities. We evaluate various state-of-the-art LLMs as language agents in a repository-editing system built on Qurrent OS, our language agent development software. Despite their 1% pass@1 performance difference on HumanEval, we find Claude Sonnet 3.5 outperforms GPT-4o by 12% pass@1 on RES-Q, indicating RES-Q's capacity to differentiate model capability as traditional benchmarks approach saturation. We further investigate token efficiency, performance relationships with existing benchmarks, and interesting disparities between closed and open-source LLMs. Code and dataset are available at https://github.com/Qurrent-AI/RES-Q.

Code Repositories

qurrent-ai/res-q (official)

Benchmarks

Benchmark: code-generation-on-res-q

| Methodology | pass@1 |
| --- | --- |
| QurrentOS-coder + Gemini 1.5 Pro | 30.0 |
| QurrentOS-coder + Claude 3.5 Sonnet | 58.0 |
| QurrentOS-coder + Llama 3 70b | 20.0 |
| QurrentOS-coder + Qwen-72B-Instruct | 18.0 |
| QurrentOS-coder + GPT-4 | 30.0 |
| QurrentOS-coder + Claude 3 Opus | 36.0 |
| QurrentOS-coder + GPT-4o | 46.0 |
| QurrentOS-coder + DeepSeek-Coder-V2 | 29.0 |
| QurrentOS-coder + GPT-4 Turbo | 37.0 |
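Since RES-Q consists of 100 tasks and each system submits a single edit per task, pass@1 here reduces to the percentage of tasks whose submitted edit satisfies the task's criteria. A minimal sketch of that computation (the function name is illustrative, not from the RES-Q codebase):

```python
def pass_at_1(results):
    """Percentage of tasks whose single submitted edit passed.

    results: list of bools, one entry per benchmark task.
    """
    return 100.0 * sum(results) / len(results)

# e.g. 58 of 100 tasks passing on the first attempt:
print(pass_at_1([True] * 58 + [False] * 42))  # 58.0
```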
