Fine-tune BERT for Extractive Summarization

Yang Liu

Abstract

BERT, a pre-trained Transformer model, has achieved ground-breaking performance on multiple NLP tasks. In this paper, we describe BERTSUM, a simple variant of BERT, for extractive summarization. Our system is the state of the art on the CNN/Dailymail dataset, outperforming the previous best-performing system by 1.65 on ROUGE-L. The code to reproduce our results is available at https://github.com/nlpyang/BertSum
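
As a rough illustration of the BERTSUM idea, the sketch below inserts a [CLS] token before each sentence, runs BERT, and scores each sentence's [CLS] vector with a linear classifier. This is a minimal sketch assuming the Hugging Face transformers package, not the authors' code (the official implementation is in the nlpyang/BertSum repository listed below); the paper's interval segment embeddings and inter-sentence Transformer summarization layers are omitted for brevity.

```python
# Minimal BERTSUM-style extractive scorer (sketch, not the official code).
# Each sentence is prefixed with [CLS]; the hidden state at each [CLS]
# position serves as that sentence's representation, which a linear layer
# scores for inclusion in the summary.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class SimpleBertSum(nn.Module):
    def __init__(self, bert_name="bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        # Per-sentence binary classifier: "include this sentence?"
        self.scorer = nn.Linear(self.bert.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask, cls_positions):
        # hidden: (batch, seq_len, hidden_size)
        hidden = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state
        # Gather the vector at each [CLS] position -> one vector per sentence
        sent_vecs = hidden[0, cls_positions]            # (n_sents, hidden)
        return torch.sigmoid(self.scorer(sent_vecs)).squeeze(-1)

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
sentences = ["The cat sat on the mat.",
             "It was a sunny day.",
             "Dogs barked outside."]
# Prepend [CLS] and append [SEP] to every sentence, as in BERTSUM
text = " ".join(f"[CLS] {s} [SEP]" for s in sentences)
enc = tokenizer(text, add_special_tokens=False, return_tensors="pt")
cls_positions = (enc.input_ids[0] == tokenizer.cls_token_id).nonzero(as_tuple=True)[0]

model = SimpleBertSum()
scores = model(enc.input_ids, enc.attention_mask, cls_positions)
print(scores)  # one extraction score per sentence; pick the top-k as the summary
```

In the paper, the untrained linear scorer above is replaced by inter-sentence Transformer layers (the "BERTSUM+Transformer" configuration in the benchmark below) and the whole model is fine-tuned on sentence-level extraction labels.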

Code Repositories

johnnyb1509/2020_VBDI_DL (tf)
nguyenphamvan/BertSum-master (pytorch)
vsubramaniam851/typ_embed (pytorch)
nakhunchumpolsathien/TR-TPBS
aikawasho/BertSum (pytorch)
thangarani/bertsum (pytorch)
raqoon886/KoBertSum (pytorch)
raqoon886/KorBertSum (pytorch)
nlpyang/BertSum (pytorch, official)
HHousen/TransformerSum (pytorch)
TidalPaladin/neural-summarizer (pytorch)

Benchmarks

Benchmark: Document Summarization on CNN/Daily Mail
Methodology: BERTSUM+Transformer
ROUGE-1: 43.25
ROUGE-2: 20.24
ROUGE-L: 39.63
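
ROUGE scores like those above can be reproduced for your own outputs with, for example, Google's rouge-score package. This is a sketch under that assumption, not the paper's evaluation setup: the paper reports scores from the standard ROUGE toolkit, and different implementations can differ slightly in the decimals.

```python
# Sketch: computing ROUGE-1/2/L F1 with the rouge-score package
# (pip install rouge-score). Not the paper's official evaluation toolkit.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"],
                                  use_stemmer=True)
reference = "the cat sat on the mat"   # gold summary
candidate = "the cat was on the mat"   # system summary
scores = scorer.score(reference, candidate)
for name, result in scores.items():
    print(f"{name}: F1 = {result.fmeasure:.4f}")
```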
