Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing

Zihang Dai Guokun Lai Yiming Yang Quoc V. Le

Abstract

With the success of language pretraining, it is highly desirable to develop more efficient architectures of good scalability that can exploit the abundant unlabeled data at a lower cost. To improve the efficiency, we examine the much-overlooked redundancy in maintaining a full-length token-level representation, especially for tasks that only require a single-vector representation of the sequence. With this intuition, we propose Funnel-Transformer, which gradually compresses the sequence of hidden states to a shorter one and hence reduces the computation cost. More importantly, by re-investing the saved FLOPs from length reduction in constructing a deeper or wider model, we further improve the model capacity. In addition, to perform token-level predictions as required by common pretraining objectives, Funnel-Transformer is able to recover a deep representation for each token from the reduced hidden sequence via a decoder. Empirically, with comparable or fewer FLOPs, Funnel-Transformer outperforms the standard Transformer on a wide variety of sequence-level prediction tasks, including text classification, language understanding, and reading comprehension. The code and pretrained checkpoints are available at https://github.com/laiguokun/Funnel-Transformer.
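
The compression described above can be pictured as strided pooling applied along the sequence axis: the attention query is pooled to a shorter length while keys and values keep the original resolution, and a decoder later upsamples the compressed states back to full length for token-level objectives. The snippet below is a minimal PyTorch sketch of that idea only; the class and function names are illustrative and are not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PoolingAttentionBlock(nn.Module):
    """Illustrative block: pool the query with stride 2 so the output
    sequence is half as long, while keys/values keep full resolution."""
    def __init__(self, d_model=256, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, h):  # h: [batch, seq_len, d_model]
        # Strided mean pooling along the sequence axis, applied to the query only.
        q = F.avg_pool1d(h.transpose(1, 2), kernel_size=2, stride=2).transpose(1, 2)
        # Pooled queries attend to the full-length keys and values.
        out, _ = self.attn(q, h, h)
        q = self.norm1(q + out)
        q = self.norm2(q + self.ffn(q))
        return q  # [batch, seq_len // 2, d_model]

def upsample(h_short, target_len):
    # Sketch of the decoder's length recovery: repeat each compressed state
    # to stretch the sequence back to the original token count.
    return h_short.repeat_interleave(target_len // h_short.size(1), dim=1)

In the paper, several such compression stages are stacked into a "funnel" of progressively shorter sequences, and the decoder adds the upsampled states to the uncompressed hidden states of the first block before making token-level predictions.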

Code Repositories

huggingface/transformers (PyTorch; mentioned in GitHub; see the usage sketch below)
laiguokun/Funnel-Transformer (TensorFlow; official)
chfhf/funnel-paddle (PaddlePaddle; mentioned in GitHub)
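
For quick experimentation, the Hugging Face port can be loaded directly. The snippet below is a usage sketch assuming the transformers library's Funnel classes and the publicly released funnel-transformer/small checkpoint; the classification head here is randomly initialized until fine-tuned.

import torch
from transformers import FunnelTokenizer, FunnelForSequenceClassification

# Load the pretrained Funnel-Transformer encoder with a sequence-classification head.
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/small")
model = FunnelForSequenceClassification.from_pretrained("funnel-transformer/small")

inputs = tokenizer("Funnel-Transformer compresses hidden states.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.shape)  # [1, num_labels]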

Benchmarks

Benchmark                        Methodology   Metrics
reading-comprehension-on-race    B10-10-10     Accuracy: 85.7 | Accuracy (High): 84.4 | Accuracy (Middle): 88.8
