
Fine-Tuning Large Language Models for Answering Programming Questions with Code Snippets

Artem Aliev, Sergey Nikolenko, Maxim Omelchenko, Sergey Kovalchuk, Vadim Lomshakov

Abstract

We study the ability of pretrained large language models (LLMs) to answer questions from online question answering forums such as Stack Overflow. We consider question-answer pairs where the main part of the answer consists of source code. On two benchmark datasets—CoNaLa and a newly collected dataset based on Stack Overflow—we investigate how a closed-book question answering system can be improved by fine-tuning the LLM for the downstream task, prompt engineering, and data preprocessing. We use publicly available autoregressive language models such as GPT-Neo, CodeGen, and PanGu-Coder, and after the proposed fine-tuning achieve a BLEU score of 0.4432 on the CoNaLa test set, significantly exceeding the previous state of the art for this task.
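The sketch below illustrates the general shape of such a fine-tuning setup: a publicly available autoregressive model (GPT-Neo 125M is used here only as a small stand-in) is trained on question-to-code pairs rendered through a simple prompt template. The template, hyperparameters, and toy data are assumptions for illustration, not the authors' exact pipeline.

from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

# Small publicly available autoregressive LLM; the paper also uses CodeGen and PanGu-Coder.
model_name = "EleutherAI/gpt-neo-125M"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Toy question/answer pairs; in practice these come from CoNaLa or a Stack Overflow dump.
pairs = [
    {"question": "How do I reverse a list in Python?",
     "code": "my_list[::-1]"},
]

def to_features(example):
    # Hypothetical "question -> code" prompt template; the paper studies several variants.
    text = (f"Question: {example['question']}\nAnswer:\n"
            f"{example['code']}{tokenizer.eos_token}")
    return tokenizer(text, truncation=True, max_length=512)

dataset = Dataset.from_list(pairs).map(to_features,
                                       remove_columns=["question", "code"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-out",
                           num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=dataset,
    # Causal language modeling objective (no masking), i.e. standard next-token prediction.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

On real data one would of course use the full training split, a larger model, and tuned hyperparameters; the point here is only the prompt-and-fine-tune pattern described in the abstract.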

Benchmarks

Benchmark                   Methodology        Metrics
code-generation-on-conala   PanGu-Coder-FT-I   BLEU: 44.32
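For reference, a corpus-level BLEU such as the 44.32 reported above can be computed roughly as in the sketch below. The benchmark's official scoring may use a different tokenizer or BLEU variant, so this is an assumption-laden illustration using sacrebleu rather than the exact evaluation script.

import sacrebleu

# One generated snippet per CoNaLa test question (toy example).
hypotheses = ["my_list[::-1]"]
# One reference stream, aligned with the hypotheses.
references = [["my_list[::-1]"]]

# Corpus-level BLEU; 44.32 on this scale corresponds to 0.4432 as a fraction.
bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU: {bleu.score:.2f}")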
