DOCmT5: Document-Level Pretraining of Multilingual Language Models

Chia-Hsuan Lee, Aditya Siddhant, Viresh Ratnakar, Melvin Johnson

Abstract

In this paper, we introduce DOCmT5, a multilingual sequence-to-sequence language model pretrained with large-scale parallel documents. While previous approaches have focused on leveraging sentence-level parallel data, we aim to build a general-purpose pretrained model that can understand and generate long documents. We propose a simple and effective pretraining objective, Document reordering Machine Translation (DrMT), in which the model must translate input documents that have been shuffled and masked. DrMT brings consistent improvements over strong baselines on a variety of document-level generation tasks: over 12 BLEU points for seen-language-pair document-level MT, over 7 BLEU points for unseen-language-pair document-level MT, and over 3 ROUGE-1 points for seen-language-pair cross-lingual summarization. We achieve state-of-the-art (SOTA) results on the WMT20 De-En and IWSLT15 Zh-En document translation tasks. We also conduct extensive analysis of factors that matter for document pretraining, including (1) the effects of pretraining data quality and (2) the effects of combining monolingual and cross-lingual pretraining. We plan to make our model checkpoints publicly available.
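
To make the DrMT objective concrete, the sketch below shows how one training example might be constructed: the source-language document is shuffled and partially masked, and the target is the parallel document in its original order in the target language. This is only an illustration under assumptions; the sentence-level granularity of shuffling and masking, the mask token, and the function names are not taken from the paper's implementation.

```python
import random

MASK_TOKEN = "<mask>"  # placeholder; the actual sentinel token is an assumption


def make_drmt_example(src_sentences, tgt_sentences, mask_prob=0.15, seed=0):
    """Build one (input, target) pair for a DrMT-style objective (illustrative sketch).

    Input:  source sentences in shuffled order, some replaced by a mask token.
    Target: the parallel target-language document in its original order.
    """
    rng = random.Random(seed)

    # Shuffle the sentence order of the source document.
    order = list(range(len(src_sentences)))
    rng.shuffle(order)
    shuffled = [src_sentences[i] for i in order]

    # Mask a fraction of the shuffled sentences.
    corrupted = [MASK_TOKEN if rng.random() < mask_prob else s for s in shuffled]

    # The model must reorder, unmask, and translate in a single pass.
    model_input = " ".join(corrupted)
    model_target = " ".join(tgt_sentences)
    return model_input, model_target


# Toy usage with a two-sentence German->English "document".
src = ["Guten Morgen.", "Wie geht es dir?"]
tgt = ["Good morning.", "How are you?"]
print(make_drmt_example(src, tgt))
```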

Benchmarks

Benchmark: document-summarization-on-wikilingua-tr-en
Methodology: DOCmT5
Metrics: ROUGE-L: 31.37
