
Abstract
As large-scale pre-trained models become increasingly widely used in natural language processing (NLP), running these large models on edge devices, or training and running inference with them under constrained computational budgets, remains challenging. This paper proposes a method for pre-training a smaller, general-purpose language representation model, called DistilBERT, which can then be fine-tuned on a wide range of tasks with performance comparable to that of its larger counterparts. While most prior work has focused on using distillation to build task-specific models, we apply knowledge distillation during the pre-training phase and show that the size of a BERT model can be reduced by 40% while retaining 97% of its language understanding capabilities and being 60% faster. To leverage the inductive biases learned by the larger model during pre-training, we introduce a triple loss combining language modeling, distillation, and cosine-distance losses. Our smaller, faster, and lighter model is cheaper to pre-train, and we demonstrate its capabilities for on-device computation in a proof-of-concept experiment and a comparative on-device study.
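The triple loss mentioned above combines a soft-target distillation term, the usual masked language modeling term, and a cosine-distance term that aligns the student's hidden states with the teacher's. Below is a minimal PyTorch sketch of how such a combined objective can be assembled; the function name, loss weights, and temperature are illustrative assumptions, not the paper's exact training configuration.

```python
import torch
import torch.nn.functional as F


def distillation_triple_loss(student_logits, teacher_logits,
                             student_hidden, teacher_hidden,
                             mlm_labels, temperature=2.0,
                             alpha_ce=5.0, alpha_mlm=2.0, alpha_cos=1.0):
    """Sketch of the three training signals described in the abstract.

    Weights and temperature here are illustrative placeholders.
    """
    # 1) Distillation loss: KL divergence between the softened student and
    #    teacher distributions over the vocabulary.
    ce_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)

    # 2) Masked language modeling loss on the student's own predictions
    #    (non-masked positions carry the label -100 and are ignored).
    mlm_loss = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        mlm_labels.view(-1),
        ignore_index=-100,
    )

    # 3) Cosine embedding loss pulling the student's hidden states toward
    #    the teacher's (target = 1 means "make them similar").
    target = student_hidden.new_ones(student_hidden.size(0) * student_hidden.size(1))
    cos_loss = F.cosine_embedding_loss(
        student_hidden.view(-1, student_hidden.size(-1)),
        teacher_hidden.view(-1, teacher_hidden.size(-1)),
        target,
    )

    return alpha_ce * ce_loss + alpha_mlm * mlm_loss + alpha_cos * cos_loss
```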
Code Repositories

| Repository | Framework | Note |
|---|---|---|
| philschmid/knowledge-distillation-transformers-pytorch-sagemaker | pytorch | Mentioned on GitHub |
| stefan-it/europeana-bert | tf | Mentioned on GitHub |
| Karthik-Bhaskar/Context-Based-Question-Answering | tf | Mentioned on GitHub |
| msorkhpar/wiki-entity-summarization-preprocessor | pytorch | Mentioned on GitHub |
| reycn/multi-modal-scale | pytorch | Mentioned on GitHub |
| knuddj1/op_text | pytorch | Mentioned on GitHub |
| flexible-fl/flex-nlp | | Mentioned on GitHub |
| askaydevs/distillbert-qa | pytorch | Mentioned on GitHub |
| Milan-Chicago/GLG-Automated-Meta-data-Tagging | tf | Mentioned on GitHub |
| dngback/co-forget-protocol | | Mentioned on GitHub |
| lukexyz/Deep-Lyrical-Genius | pytorch | Mentioned on GitHub |
| jaketae/pytorch-malware-detection | pytorch | Mentioned on GitHub |
| knuddy/op_text | pytorch | Mentioned on GitHub |
| sdadas/polish-roberta | pytorch | Mentioned on GitHub |
| tchebonenko/Automated-Topic_Modeling-and-NER | tf | Mentioned on GitHub |
| monologg/distilkobert | pytorch | Mentioned on GitHub |
| huggingface/transformers | pytorch | Official; Mentioned on GitHub |
| facebookresearch/EgoTV | pytorch | Mentioned on GitHub |
| huggingface/tflite-android-transformers | tf | Mentioned on GitHub |
| huggingface/node-question-answering | tf | Mentioned on GitHub |
| frankaging/Causal-Distill | pytorch | Mentioned on GitHub |
| allenai/scifact | pytorch | Mentioned on GitHub |
| franknb/Text-Summarization | | Mentioned on GitHub |
| epfml/collaborative-attention | pytorch | Mentioned on GitHub |
| twobooks/intro-aws-training | pytorch | Mentioned on GitHub |
| mkavim/finetune_bert | tf | Mentioned on GitHub |
| suinleelab/path_explain | tf | Mentioned on GitHub |
| semantic-web-company/ptlm_wsid | | Mentioned on GitHub |
| ayeffkay/rubert-tiny | pytorch | Mentioned on GitHub |
| nageshsinghc4/deepwrap | tf | Mentioned on GitHub |
| enzomuschik/distilfnd | pytorch | Mentioned on GitHub |
| huggingface/swift-coreml-transformers | pytorch | Official; Mentioned on GitHub |
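The official huggingface/transformers repository listed above ships pretrained DistilBERT weights. A minimal usage sketch for extracting sentence representations, assuming the distilbert-base-uncased checkpoint on the Hugging Face Hub:

```python
import torch
from transformers import DistilBertModel, DistilBertTokenizer

# Load the pretrained DistilBERT checkpoint and its tokenizer.
tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
model = DistilBertModel.from_pretrained("distilbert-base-uncased")

inputs = tokenizer("DistilBERT is a distilled version of BERT.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# last_hidden_state has shape (batch_size, sequence_length, hidden_size=768).
print(outputs.last_hidden_state.shape)
```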
Benchmarks
| Benchmark | Method | Metrics |
|---|---|---|
| linguistic-acceptability-on-cola | DistilBERT 66M | Accuracy: 49.1% |
| natural-language-inference-on-qnli | DistilBERT 66M | Accuracy: 90.2% |
| natural-language-inference-on-rte | DistilBERT 66M | Accuracy: 62.9% |
| natural-language-inference-on-wnli | DistilBERT 66M | Accuracy: 44.4% |
| question-answering-on-multitq | DistilBERT | Hits@1: 8.3; Hits@10: 48.4 |
| question-answering-on-quora-question-pairs | DistilBERT 66M | Accuracy: 89.2% |
| question-answering-on-squad11-dev | DistilBERT 66M | F1: 85.8 |
| question-answering-on-squad11-dev | DistilBERT | EM: 77.7 |
| semantic-textual-similarity-on-mrpc | DistilBERT 66M | Accuracy: 90.2% |
| semantic-textual-similarity-on-sts-benchmark | DistilBERT 66M | Pearson Correlation: 0.907 |
| sentiment-analysis-on-imdb | DistilBERT 66M | Accuracy: 92.82% |
| sentiment-analysis-on-sst-2-binary | DistilBERT 66M | Accuracy: 91.3% |
| task-1-grouping-on-ocw | DistilBERT (BASE) | # Correct Groups: 49 ± 4; # Solved Walls: 0 ± 0; Adjusted Mutual Information (AMI): 14.0 ± 0.3; Adjusted Rand Index (ARI): 11.3 ± 0.3; Fowlkes Mallows Score (FMS): 29.1 ± 0.2; Wasserstein Distance (WD): 86.7 ± 0.6 |
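For a quick, informal sanity check against rows such as the SST-2 entry above, a fine-tuned DistilBERT sentiment model can be queried through the transformers pipeline API. The checkpoint name below (distilbert-base-uncased-finetuned-sst-2-english) is a Hub model assumed here for illustration; the snippet does not reproduce the table's numbers.

```python
from transformers import pipeline

# Sentiment classification with a fine-tuned DistilBERT checkpoint (assumed Hub model).
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(classifier("This movie was surprisingly good!"))
# Expected output is a list with a label and a confidence score,
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```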