FlauBERT: Unsupervised Language Model Pre-training for French

Hang Le Loïc Vial Jibril Frej Vincent Segonne Maximin Coavoux Benjamin Lecouteux Alexandre Allauzen Benoît Crabbé Laurent Besacier Didier Schwab

Abstract

Language models have become a key step toward achieving state-of-the-art results in many Natural Language Processing (NLP) tasks. Leveraging the huge amount of unlabeled text now available, they provide an efficient way to pre-train continuous word representations that can be fine-tuned for a downstream task, along with their contextualization at the sentence level. This has been widely demonstrated for English using contextualized representations (Dai and Le, 2015; Peters et al., 2018; Howard and Ruder, 2018; Radford et al., 2018; Devlin et al., 2019; Yang et al., 2019b). In this paper, we introduce and share FlauBERT, a model learned on a very large and heterogeneous French corpus. Models of different sizes are trained using the new CNRS (French National Centre for Scientific Research) Jean Zay supercomputer. We apply our French language models to diverse NLP tasks (text classification, paraphrasing, natural language inference, parsing, word sense disambiguation) and show that, most of the time, they outperform other pre-training approaches. Different versions of FlauBERT, as well as a unified evaluation protocol for the downstream tasks called FLUE (French Language Understanding Evaluation), are shared with the research community for further reproducible experiments in French NLP.
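The unsupervised pre-training the abstract describes is, like BERT's, based on a masked language modeling (MLM) objective: a fraction of input tokens is corrupted and the model learns to predict the originals. The sketch below illustrates the standard BERT-style corruption scheme in plain Python; it is a simplified illustration, not FlauBERT's actual preprocessing code, and the `[MASK]` token name and toy vocabulary are placeholders for this example.

```python
import random

def mask_tokens(tokens, vocab, mask_token="[MASK]", mask_prob=0.15, rng=None):
    """BERT-style MLM corruption (simplified sketch).

    About 15% of positions are selected for prediction; of those,
    80% are replaced with the mask token, 10% with a random vocabulary
    token, and 10% are left unchanged. The model is trained to recover
    the original token at every selected position.
    """
    rng = rng or random.Random(0)
    corrupted = list(tokens)
    labels = [None] * len(tokens)  # None = position not used in the loss
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            labels[i] = tok  # the target the model must predict here
            roll = rng.random()
            if roll < 0.8:
                corrupted[i] = mask_token          # 80%: mask
            elif roll < 0.9:
                corrupted[i] = rng.choice(vocab)   # 10%: random token
            # else 10%: keep the original token unchanged
    return corrupted, labels

# Toy usage with a French sentence and its own words as the "vocabulary".
sentence = "le modèle apprend des représentations contextuelles".split()
masked, labels = mask_tokens(sentence, vocab=sentence, rng=random.Random(42))
```

During actual pre-training this corruption runs over subword units from a learned tokenizer rather than whitespace-split words, and the loss is computed only at positions where `labels` is set.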

Code Repositories

bencrabbe/npdependency (PyTorch)
getalp/disambiguate (PyTorch)
ialifinaritra/text_summarization (PyTorch)
huggingface/transformers (PyTorch)
getalp/Flaubert (official, PyTorch)
bourrel/French-News-Clustering (TensorFlow)

Benchmarks

Benchmark | Methodology | Metric
Natural Language Inference on XNLI (French) | FlauBERT (large) | Accuracy: 83.4
Natural Language Inference on XNLI (French) | FlauBERT (base) | Accuracy: 80.6
