GLM: General Language Model Pretraining with Autoregressive Blank Infilling

Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, Jie Tang

Abstract

There have been various types of pretraining architectures, including autoencoding models (e.g., BERT), autoregressive models (e.g., GPT), and encoder-decoder models (e.g., T5). However, none of these pretraining frameworks performs best across all tasks in the three main categories: natural language understanding (NLU), unconditional generation, and conditional generation. We propose the General Language Model (GLM), based on autoregressive blank infilling, to address this challenge. GLM improves blank-infilling pretraining by adding 2D positional encodings and allowing spans to be predicted in an arbitrary order, which yields performance gains over BERT and T5 on NLU tasks. Meanwhile, GLM can be pretrained for different types of tasks by varying the number and lengths of blanks. On a wide range of tasks across NLU, conditional generation, and unconditional generation, GLM outperforms BERT, T5, and GPT given the same model sizes and data, and achieves the best performance from a single pretrained model with 1.25x the parameters of BERT-Large, demonstrating its generalizability to different downstream tasks.
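
To make the blank-infilling setup concrete, below is a minimal, self-contained sketch (pure Python, not taken from the official THUDM/GLM code) of how a training example might be assembled: sampled spans are replaced by [MASK] tokens in Part A, the spans themselves are appended in an arbitrary order as Part B, and each token receives a pair of position ids (its location in the corrupted text, and its offset within the span being generated). The token ids and special-token values here are hypothetical placeholders chosen only for illustration.

```python
# Minimal sketch of GLM-style autoregressive blank infilling input
# construction. MASK/START/END ids and the token values are hypothetical;
# the official THUDM/GLM code defines its own vocabulary and batching.
import random

MASK, START, END = 100, 101, 102  # placeholder special-token ids


def build_glm_example(tokens, spans, shuffle_spans=True):
    """tokens: list of token ids; spans: list of (start, end) pairs, end exclusive."""
    spans = sorted(spans)

    # Part A: replace every sampled span with a single MASK token.
    part_a, mask_positions, cursor = [], [], 0
    for start, end in spans:
        part_a.extend(tokens[cursor:start])
        mask_positions.append(len(part_a))  # where this span's MASK sits
        part_a.append(MASK)
        cursor = end
    part_a.extend(tokens[cursor:])

    # Part B: the blanked spans, optionally in a shuffled (arbitrary) order.
    order = list(range(len(spans)))
    if shuffle_spans:
        random.shuffle(order)

    input_ids = list(part_a)
    pos_level1 = list(range(len(part_a)))   # position in the corrupted text
    pos_level2 = [0] * len(part_a)          # 0 for all Part A tokens
    targets = [-100] * len(part_a)          # -100: PyTorch-style ignore index

    for i in order:
        span = tokens[spans[i][0]:spans[i][1]]
        anchor = mask_positions[i]          # the MASK this span fills
        step_inputs = [START] + span        # teacher-forced inputs
        step_targets = span + [END]         # next-token targets
        input_ids.extend(step_inputs)
        targets.extend(step_targets)
        # 2D positions: level 1 points at the MASK position in Part A,
        # level 2 counts 1..len within the span being generated.
        pos_level1.extend([anchor] * len(step_inputs))
        pos_level2.extend(range(1, len(step_inputs) + 1))

    return input_ids, (pos_level1, pos_level2), targets


if __name__ == "__main__":
    toks = [11, 12, 13, 14, 15, 16]
    ids, (p1, p2), tgt = build_glm_example(toks, [(1, 3), (4, 5)], shuffle_spans=False)
    print(ids)  # Part A with MASKs, then START-prefixed spans
    print(p1)   # e.g. [0, 1, 2, 3, 4, 1, 1, 1, 3, 3]
    print(p2)   # e.g. [0, 0, 0, 0, 0, 1, 2, 3, 1, 2]
```

In the full model, Part A tokens attend to each other bidirectionally, while each Part B token attends to Part A and to the Part B tokens generated before it; that attention mask, along with padding and batching, is omitted from this sketch.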

Code Repositories

thudm/chatglm2-6b (PyTorch)
thudm/chatglm (PyTorch)
thudm/visualglm-6b (PyTorch)
THUDM/GLM (official, PyTorch)
thudm/swissarmytransformer (PyTorch)
BBuf/GLM (PyTorch)
thudm/chatglm3 (PyTorch)

Benchmarks

Benchmark | Methodology | Metrics
abstractive-text-summarization-on-cnn-daily | GLM-XXLarge | ROUGE-1: 44.7, ROUGE-2: 21.4, ROUGE-L: 41.4
document-summarization-on-cnn-daily-mail | GLM-XXLarge | ROUGE-1: 44.7, ROUGE-2: 21.4, ROUGE-L: 41.4
language-modelling-on-lambada | GLM-XXLarge (bidirectional) | Accuracy: 72.35
language-modelling-on-lambada | GLM-XXLarge (unidirectional) | Accuracy: 67.18
language-modelling-on-wikitext-103 | GLM-XXLarge (unidirectional) | Number of params: 10000M, Test perplexity: 12.22
language-modelling-on-wikitext-103 | GLM-XXLarge (bidirectional) | Number of params: 10000M, Test perplexity: 11.33
