LERT: A Linguistically-motivated Pre-trained Language Model

Yiming Cui; Wanxiang Che; Shijin Wang; Ting Liu


Abstract

Pre-trained language models (PLMs) have become representative foundation models in natural language processing. Most PLMs are trained with linguistically-agnostic pre-training tasks on the surface form of the text, such as masked language modeling (MLM). To further empower PLMs with richer linguistic features, in this paper we propose a simple but effective way to learn linguistic features for pre-trained language models. Specifically, we propose LERT, a pre-trained language model trained on three types of linguistic features along with the original MLM pre-training task, using a linguistically-informed pre-training (LIP) strategy. We carried out extensive experiments on ten Chinese NLU tasks, and the results show that LERT brings significant improvements over various comparable baselines. Furthermore, we conduct analytical experiments on various linguistic aspects, and the results confirm that the design of LERT is valid and effective. Resources are available at https://github.com/ymcui/LERT
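Concretely, the three linguistic feature types in the paper are part-of-speech (POS) tags, named-entity (NER) labels, and dependency-parsing (DEP) relations, each predicted by a token-level head on top of the shared encoder. The sketch below illustrates this multi-task setup; it is not the authors' implementation, and the hidden size, tag-set sizes, and exact shape of the LIP weight schedule are illustrative assumptions.

```python
# A minimal sketch (not the authors' code) of LERT-style multi-task
# pre-training: a shared encoder feeds an MLM head plus three
# token-level linguistic heads (POS, NER, dependency relations).
# Hidden size, tag-set sizes, and the LIP schedule shape are assumptions.
import torch
import torch.nn as nn

class ToyLERT(nn.Module):
    def __init__(self, vocab_size=21128, hidden=256,
                 n_pos=30, n_ner=13, n_dep=14):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        layer = nn.TransformerEncoderLayer(hidden, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # One prediction head per pre-training task, all over token states.
        self.mlm_head = nn.Linear(hidden, vocab_size)
        self.pos_head = nn.Linear(hidden, n_pos)
        self.ner_head = nn.Linear(hidden, n_ner)
        self.dep_head = nn.Linear(hidden, n_dep)

    def forward(self, input_ids):
        h = self.encoder(self.embed(input_ids))
        return (self.mlm_head(h), self.pos_head(h),
                self.ner_head(h), self.dep_head(h))

def lip_weight(step, total_steps):
    # Illustrative linguistically-informed pre-training (LIP) weight:
    # linguistic tasks contribute strongly early and fade out, leaving
    # MLM to dominate later (the exact schedule shape is an assumption).
    return max(0.0, 1.0 - step / total_steps)

# One illustrative training step on random data. A real run would mask
# tokens and score MLM only on masked positions; all positions are
# scored here for brevity.
model, ce = ToyLERT(), nn.CrossEntropyLoss()
ids = torch.randint(0, 21128, (2, 16))   # toy token ids
pos = torch.randint(0, 30, (2, 16))      # toy POS tags
ner = torch.randint(0, 13, (2, 16))      # toy NER labels
dep = torch.randint(0, 14, (2, 16))      # toy dependency relations
mlm_lg, pos_lg, ner_lg, dep_lg = model(ids)
w = lip_weight(step=1_000, total_steps=100_000)
loss = ce(mlm_lg.flatten(0, 1), ids.flatten()) + w * (
    ce(pos_lg.flatten(0, 1), pos.flatten())
    + ce(ner_lg.flatten(0, 1), ner.flatten())
    + ce(dep_lg.flatten(0, 1), dep.flatten()))
loss.backward()
```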

Code Repositories

ymcui/lert (official implementation; TensorFlow)
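The released checkpoints can also be loaded directly with Hugging Face transformers, as in the sketch below. The hub identifier "hfl/chinese-lert-base" is assumed from HFL's naming convention and should be verified against the repository's README.

```python
# Quick-start sketch for loading released LERT weights with Hugging Face
# transformers. The hub id "hfl/chinese-lert-base" is an assumption;
# check the repository README for the exact identifiers.
from transformers import BertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("hfl/chinese-lert-base")
model = BertModel.from_pretrained("hfl/chinese-lert-base")

inputs = tokenizer("哈工大讯飞联合实验室", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, seq_len, 768) for a base model
```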

Benchmarks

Benchmark                           Methodology                          Accuracy   F1-score   Precision   Recall
stock-market-prediction-on-astock   Chinese LERT-large (News)            64.37      64.30      64.34       64.31
stock-market-prediction-on-astock   Chinese LERT-large (News+Factors)    66.36      66.16      66.40       66.69
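For reference, the four reported metrics relate as in the runnable check below. The use of macro averaging for the multi-class scores is an assumption, as the leaderboard entry does not state its averaging mode.

```python
# Toy check of the four reported metrics on a 3-class prediction task.
# Macro averaging is an assumption about the leaderboard's convention.
from sklearn.metrics import (accuracy_score, f1_score,
                             precision_score, recall_score)

y_true = [0, 1, 2, 1, 0, 2, 1, 0]   # toy labels (e.g., down/flat/up)
y_pred = [0, 1, 1, 1, 0, 2, 0, 0]

print("Accuracy :", accuracy_score(y_true, y_pred))
print("F1-score :", f1_score(y_true, y_pred, average="macro"))
print("Precision:", precision_score(y_true, y_pred, average="macro"))
print("Recall   :", recall_score(y_true, y_pred, average="macro"))
```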
