Teaching Large Language Models to Self-Debug

Xinyun Chen Maxwell Lin Nathanael Schärli Denny Zhou

Abstract

Large language models (LLMs) have achieved impressive performance on code generation. However, for complex programming tasks, generating the correct solution in one go becomes challenging, and some prior works have designed program repair approaches to improve code generation performance. In this work, we propose Self-Debugging, which teaches a large language model to debug its predicted program via few-shot demonstrations. In particular, we demonstrate that Self-Debugging can teach the large language model to perform rubber duck debugging; i.e., without any human feedback on code correctness or error messages, the model is able to identify its mistakes by investigating the execution results and explaining the generated code in natural language. Self-Debugging achieves state-of-the-art performance on several code generation benchmarks, including the Spider dataset for text-to-SQL generation, TransCoder for C++-to-Python translation, and MBPP for text-to-Python generation. On the Spider benchmark, where there are no unit tests to verify the correctness of predictions, Self-Debugging with code explanation consistently improves the baseline by 2-3% and improves prediction accuracy on problems of the hardest level by 9%. On TransCoder and MBPP, where unit tests are available, Self-Debugging improves the baseline accuracy by up to 12%. Meanwhile, by leveraging feedback messages and reusing failed predictions, Self-Debugging notably improves sample efficiency and can match or outperform baseline models that generate more than 10x candidate programs.
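The abstract describes an iterative generate, execute, explain, and refine loop. Below is a minimal sketch of that loop for the unit-test setting, assuming a hypothetical `llm_complete` helper that wraps a few-shot-prompted model call; the prompt wording and the `max_turns` budget are illustrative stand-ins, not the paper's exact prompts.

```python
import traceback

def llm_complete(prompt: str) -> str:
    """Hypothetical stand-in for a few-shot-prompted LLM call; plug in a real model API."""
    raise NotImplementedError

def run_unit_tests(code: str, tests: list[str]) -> tuple[bool, str]:
    """Execute the candidate program, then run each assert-style test against it."""
    namespace: dict = {}
    try:
        exec(code, namespace)          # define the candidate function(s)
        for test in tests:
            exec(test, namespace)      # e.g. "assert add(1, 2) == 3"
        return True, "All unit tests passed."
    except Exception:
        return False, traceback.format_exc(limit=1)

def self_debug(problem: str, tests: list[str], max_turns: int = 3) -> str:
    """Generate a program, then iteratively explain and refine it using execution feedback."""
    code = llm_complete(f"Write a Python function that solves:\n{problem}")
    for _ in range(max_turns):
        passed, feedback = run_unit_tests(code, tests)
        if passed:
            break
        # Rubber duck debugging: the model explains its own code in natural
        # language, then uses the explanation plus execution feedback to fix it.
        explanation = llm_complete(f"Explain this code line by line:\n{code}")
        code = llm_complete(
            f"Problem:\n{problem}\n\nCurrent program:\n{code}\n\n"
            f"Explanation:\n{explanation}\n\nExecution feedback:\n{feedback}\n\n"
            "Based on the feedback, produce a corrected program."
        )
    return code
```

For Spider-style tasks without unit tests, the `run_unit_tests` step would instead execute the predicted SQL query and feed back the returned table, so the model's own code explanation serves as the main correctness signal.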

Code Repositories

amazon-science/SDFeedback
amazon-science/self_debug

Benchmarks

Benchmark: Code Generation on MBPP

Methodology                                                      Accuracy
GPT-4 (Self-Debugging with unit tests + trace)                   80.2
GPT-3.5 Turbo (Self-Debugging with unit tests + trace)           72.8
GPT-3.5 Turbo (3-shot)                                           67.6
code-davinci-002 175B (Self-Debugging with unit tests + trace)   70.8
code-davinci-002 175B (3-shot)                                   61.4
StarCoder 15.5B (Self-Debugging with unit tests + trace)         53.2
StarCoder 15.5B (3-shot)                                         47.2
