VarMAE: Pre-training of Variational Masked Autoencoder for Domain-adaptive Language Understanding
Dou Hu, Xiaolong Hou, Xiyang Du, Mengyuan Zhou, Lianxin Jiang, Yang Mo, Xiaofeng Shi

Abstract
Pre-trained language models achieve promising performance on general benchmarks, but underperform when migrated to a specific domain. Recent works perform pre-training from scratch or continual pre-training on domain corpora. However, in many specific domains, the available corpus is too limited to support learning precise representations. To address this issue, we propose a novel Transformer-based language model named VarMAE for domain-adaptive language understanding. Under the masked autoencoding objective, we design a context uncertainty learning module that encodes each token's context into a smooth latent distribution. The module can produce diverse and well-formed contextual representations. Experiments on science- and finance-domain NLU tasks demonstrate that VarMAE can be efficiently adapted to new domains with limited resources.
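The abstract only sketches the context uncertainty learning idea at a high level. The snippet below is a minimal PyTorch sketch of one common way such a module can be realized: per-token mean/variance heads over the encoder's hidden states, reparameterized sampling, and a KL regularizer added to the masked-LM loss. The class and function names and the KL weight are illustrative assumptions, not the paper's released implementation.

```python
# Minimal sketch of a context uncertainty learning module, assuming a generic
# Transformer encoder that produces per-token hidden states. All names and the
# KL weight below are illustrative, not taken from the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContextUncertaintyLearning(nn.Module):
    """Encodes each token's context into a Gaussian latent distribution."""

    def __init__(self, hidden_size: int, latent_size: int):
        super().__init__()
        self.to_mean = nn.Linear(hidden_size, latent_size)
        self.to_logvar = nn.Linear(hidden_size, latent_size)
        self.to_hidden = nn.Linear(latent_size, hidden_size)

    def forward(self, hidden_states: torch.Tensor):
        # hidden_states: (batch, seq_len, hidden_size) from the Transformer encoder.
        mean = self.to_mean(hidden_states)
        logvar = self.to_logvar(hidden_states)

        # Reparameterization trick: sample z ~ N(mean, sigma^2) differentiably.
        std = torch.exp(0.5 * logvar)
        z = mean + std * torch.randn_like(std)

        # KL divergence to a standard normal prior, averaged over tokens.
        kl = -0.5 * torch.mean(1 + logvar - mean.pow(2) - logvar.exp())

        # Map the latent sample back to hidden size for the masked-LM head.
        return self.to_hidden(z), kl


def masked_autoencoding_loss(mlm_logits, labels, kl, kl_weight=0.1):
    """Masked-LM cross entropy plus a weighted KL term (weight is an assumption)."""
    mlm_loss = F.cross_entropy(
        mlm_logits.view(-1, mlm_logits.size(-1)), labels.view(-1), ignore_index=-100
    )
    return mlm_loss + kl_weight * kl
```

In this kind of setup, the latent sample replaces the deterministic hidden state before the masked-LM prediction head, so reconstruction is driven through a smooth distribution over contexts rather than a single point estimate.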
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| citation-intent-classification-on-acl-arc | VarMAE | Micro-F1: 76.50 (Macro-F1: not reported) |
| participant-intervention-comparison-outcome | VarMAE | F1: 76.01 |