ZEN: Pre-training Chinese Text Encoder Enhanced by N-gram Representations

Shizhe Diao, Jiaxin Bai, Yan Song, Tong Zhang, Yonggang Wang


Abstract

The pre-training of text encoders normally processes text as a sequence of tokens corresponding to small text units, such as word pieces in English and characters in Chinese. It omits information carried by larger text units, and thus the encoders cannot easily adapt to certain combinations of characters. This leads to a loss of important semantic information, which is especially problematic for Chinese because the language does not have explicit word boundaries. In this paper, we propose ZEN, a BERT-based Chinese (Z) text encoder Enhanced by N-gram representations, where different combinations of characters are considered during training. As a result, potential word or phrase boundaries are explicitly pre-trained and fine-tuned with the character encoder (BERT), so that ZEN incorporates the comprehensive information of both the character sequence and the words or phrases it contains. Experimental results illustrate the effectiveness of ZEN on a series of Chinese NLP tasks: using fewer resources than other published encoders, ZEN achieves state-of-the-art performance on most tasks. Moreover, reasonable performance can be obtained when ZEN is trained on a small corpus, which is important for applying pre-training techniques to scenarios with limited data. The code and pre-trained models of ZEN are available at https://github.com/sinovation/zen.
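To make the n-gram enhancement concrete, below is a minimal PyTorch sketch of the core idea from the abstract: an auxiliary encoder embeds the n-grams matched in a sentence, and each matched n-gram's representation is added back to the hidden states of the characters it covers. All class, function, and parameter names here are illustrative assumptions, not the paper's actual implementation; see the official repository above for the real model.

```python
import torch
import torch.nn as nn

class NgramEnhancedLayer(nn.Module):
    """Minimal sketch of ZEN-style n-gram enhancement (names are illustrative).

    N-grams from a lexicon are matched against the input sentence, embedded,
    and encoded; each n-gram's representation is then added back to the hidden
    states of the characters it covers.
    """

    def __init__(self, hidden_size: int = 768, ngram_vocab_size: int = 20000):
        super().__init__()
        self.ngram_embeddings = nn.Embedding(ngram_vocab_size, hidden_size)
        # hypothetical single-layer n-gram encoder; the paper uses a deeper one
        self.ngram_encoder = nn.TransformerEncoderLayer(
            d_model=hidden_size, nhead=12, batch_first=True
        )

    def forward(
        self,
        char_hidden: torch.Tensor,      # (batch, char_len, hidden)
        ngram_ids: torch.Tensor,        # (batch, ngram_len) matched n-gram ids
        ngram_positions: torch.Tensor,  # (batch, char_len, ngram_len) 0/1 mask;
                                        # 1 where n-gram j covers character i
    ) -> torch.Tensor:
        ngram_hidden = self.ngram_encoder(self.ngram_embeddings(ngram_ids))
        # scatter each n-gram's representation onto the characters it covers
        return char_hidden + torch.bmm(ngram_positions.float(), ngram_hidden)

# toy usage: one sentence of 6 characters with 2 matched n-grams
layer = NgramEnhancedLayer()
chars = torch.randn(1, 6, 768)
ngrams = torch.tensor([[3, 17]])
positions = torch.zeros(1, 6, 2)
positions[0, 0:2, 0] = 1  # first n-gram covers characters 0-1
positions[0, 2:5, 1] = 1  # second n-gram covers characters 2-4
out = layer(chars, ngrams, positions)  # (1, 6, 768)
```

In the paper's design this kind of fusion is applied layer by layer alongside the character encoder rather than once, but the single-layer version above captures the mechanism: character representations stay intact, and n-gram information is injected additively only at covered positions.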

Code Repositories

- SVAIGBA/WMSeg (PyTorch)
- sinovation/ZEN (official, PyTorch)
- YYGe01/ZEN (PyTorch)
- cuhksz-nlp/SAPar (PyTorch)
- cuhksz-nlp/mcasp (PyTorch)
- SVAIGBA/TwASP (PyTorch)
- cuhksz-nlp/het-mc (PyTorch)

Benchmarks

Benchmark                                   Methodology                    Metric
Chinese Named Entity Recognition on MSRA    ZEN (Init. with Chinese BERT)  F1: 95.25
Chinese Named Entity Recognition on MSRA    ZEN (Random Init.)             F1: 93.24
Chinese Word Segmentation on MSR            ZEN (Random Init.)             F1: 97.89
Chinese Word Segmentation on MSR            ZEN (Init. with Chinese BERT)  F1: 98.35
