Entailment as Few-Shot Learner

Sinong Wang, Han Fang, Madian Khabsa, Hanzi Mao, Hao Ma
Abstract

Large pre-trained language models (LMs) have demonstrated remarkable ability as few-shot learners. However, their success hinges largely on scaling model parameters to a degree that makes them challenging to train and serve. In this paper, we propose a new approach, named EFL, that can turn small LMs into better few-shot learners. The key idea of this approach is to reformulate a given NLP task as an entailment task, and then fine-tune the model with as few as 8 examples. We further demonstrate that the proposed method can be (i) naturally combined with an unsupervised contrastive-learning-based data augmentation method, and (ii) easily extended to multilingual few-shot learning. A systematic evaluation on 18 standard NLP tasks demonstrates that this approach improves on various existing SOTA few-shot learning methods by 12%, and yields few-shot performance competitive with models 500 times larger, such as GPT-3.
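The reformulation at the heart of EFL can be sketched in a few lines: each classification example is paired with a short, label-describing hypothesis, and the original task becomes a binary entailment judgment. The sketch below illustrates this idea only; the templates and function names are illustrative, not the paper's exact prompts.

```python
# Sketch of EFL-style task reformulation: convert a labeled classification
# example into (premise, hypothesis, entailed?) triples, one per candidate
# label. The hypothesis wording here is an assumption for illustration.

LABEL_TEMPLATES = {
    "positive": "This text expresses a positive sentiment.",
    "negative": "This text expresses a negative sentiment.",
}

def to_entailment_pairs(text, gold_label):
    """Return (premise, hypothesis, is_entailed) triples for one example."""
    return [
        (text, hypothesis, label == gold_label)
        for label, hypothesis in LABEL_TEMPLATES.items()
    ]

# One sentiment example yields one entailed and one non-entailed pair,
# which an entailment model can then be fine-tuned on.
for premise, hypothesis, is_entailed in to_entailment_pairs(
    "A gripping, beautifully shot film.", "positive"
):
    print(f"{premise} => {hypothesis} [{is_entailed}]")
```

In this framing, few-shot fine-tuning touches only the binary entailment head, which is why the approach transfers across tasks that share the same template structure.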

Benchmarks

Benchmark | Methodology | Metrics
linguistic-acceptability-on-cola | RoBERTa-large 355M + Entailment as Few-shot Learner | Accuracy: 86.4%
natural-language-inference-on-qnli | RoBERTa-large 355M + Entailment as Few-shot Learner | Accuracy: 94.5%
natural-language-inference-on-rte | RoBERTa-large 355M + EFL + UCA | Accuracy: 87.2%
natural-language-inference-on-rte | RoBERTa-large 355M + Entailment as Few-shot Learner | Accuracy: 90.5%
natural-language-inference-on-snli | EFL (Entailment as Few-shot Learner) + RoBERTa-large | Test Accuracy (%): 93.1; Train Accuracy (%): ?; Parameters: 355M
paraphrase-identification-on-quora-question | RoBERTa-large 355M + Entailment as Few-shot Learner | F1: 89.2
question-answering-on-boolq | RoBERTa-large 355M + Entailment as Few-shot Learner | Accuracy: 86.0
semantic-textual-similarity-on-mrpc | RoBERTa-large 355M + Entailment as Few-shot Learner | F1: 91.0
semantic-textual-similarity-on-sts-benchmark | RoBERTa-large 355M + Entailment as Few-shot Learner | Pearson Correlation: 0.918
sentiment-analysis-on-cr | RoBERTa-large 355M + Entailment as Few-shot Learner | Accuracy: 92.5
sentiment-analysis-on-imdb | RoBERTa-large 355M + Entailment as Few-shot Learner | Accuracy: 96.1
sentiment-analysis-on-mpqa | RoBERTa-large 355M + Entailment as Few-shot Learner | Accuracy: 90.8
sentiment-analysis-on-mr | RoBERTa-large 355M + Entailment as Few-shot Learner | Accuracy: 92.5
sentiment-analysis-on-sst-2-binary | RoBERTa-large 355M + Entailment as Few-shot Learner | Accuracy: 96.9
subjectivity-analysis-on-subj | RoBERTa-large 355M + Entailment as Few-shot Learner | Accuracy: 97.1