Cloze-driven Pretraining of Self-attention Networks

Alexei Baevski; Sergey Edunov; Yinhan Liu; Luke Zettlemoyer; Michael Auli

Abstract

We present a new approach for pretraining a bi-directional transformer model that provides significant performance gains across a variety of language understanding problems. Our model solves a cloze-style word reconstruction task, where each word is ablated and must be predicted given the rest of the text. Experiments demonstrate large performance gains on GLUE and new state of the art results on NER as well as constituency parsing benchmarks, consistent with the concurrently introduced BERT model. We also present a detailed analysis of a number of factors that contribute to effective pretraining, including data domain and size, model capacity, and variations on the cloze objective.
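
To make the training objective concrete, here is a minimal sketch of the cloze-style reconstruction task the abstract describes: each word is ablated in turn and must be predicted from the remaining text. The `MASK` placeholder symbol and the helper function are illustrative assumptions, not the paper's exact preprocessing.

```python
# Hypothetical placeholder for an ablated word (assumption, not from the paper).
MASK = "<mask>"

def cloze_examples(tokens):
    """Yield (masked_sequence, position, target_word) training examples,
    ablating one word at a time as in a cloze task."""
    for i, target in enumerate(tokens):
        masked = tokens[:i] + [MASK] + tokens[i + 1:]
        yield masked, i, target

sentence = "the model predicts each ablated word from its context".split()
for masked, pos, target in cloze_examples(sentence):
    print(f"predict {target!r} at position {pos}: {' '.join(masked)}")
```

In the paper, a bi-directional self-attention model consumes the surrounding context and is trained to recover each ablated word; the sketch above only shows how such (context, target) pairs can be derived from raw text.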

Benchmarks

Benchmark                                      Methodology             Metric
Constituency Parsing on Penn Treebank          CNN Large + fine-tune   F1 score: 95.6
Named Entity Recognition (NER) on CoNLL 2003   CNN Large + fine-tune   F1: 93.5
Sentiment Analysis on SST-2 (binary)           CNN Large               Accuracy: 94.6
