HyperAI



Learning to Encode Position for Transformer with Continuous Dynamical Model

Xuanqing Liu, Hsiang-Fu Yu, Inderjit Dhillon, Cho-Jui Hsieh


Abstract

We introduce a new way of learning to encode position information for non-recurrent models, such as Transformer models. Unlike RNNs and LSTMs, which carry an inductive bias by processing input tokens sequentially, non-recurrent models are less sensitive to position. The main reason is that position information among input units is not inherently encoded, i.e., the models are permutation equivariant; this explains why all existing models are accompanied by a sinusoidal encoding/embedding layer at the input. However, this solution has clear limitations: the sinusoidal encoding is not flexible enough, as it is manually designed and contains no learnable parameters, whereas a learned position embedding restricts the maximum length of input sequences. It is thus desirable to design a new position layer with learnable parameters that can adapt to different datasets and architectures. At the same time, we would like the encodings to extrapolate to inputs of variable length. Our proposed solution borrows from the recent Neural ODE approach, which may be viewed as a versatile continuous-depth version of a ResNet capable of modeling many kinds of dynamical systems. We model the evolution of the encodings along the position index with such a dynamical system, thereby overcoming the above limitations of existing methods. We evaluate our new position layers on a variety of neural machine translation and language understanding tasks; the experimental results show consistent improvements over the baselines.
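The core idea of the abstract — generating position encodings by integrating a learned dynamical system over the position index, so that encodings extrapolate to any sequence length — can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the function name `floater_positions`, the small MLP used as the dynamics function, and the simple Euler integration scheme are all hypothetical stand-ins, not the paper's actual architecture or ODE solver.

```python
import numpy as np

def floater_positions(seq_len, d_model, theta, dt=0.1):
    """Sketch: generate position encodings p(t) by integrating a learned
    dynamics dp/dt = h(t, p; theta) with a forward Euler scheme.

    theta = (W1, b1, W2, b2): parameters of a small MLP standing in for
    the learnable dynamics h. Because the encodings are defined by a
    continuous process rather than a fixed-size table, the same theta
    produces encodings for sequences of any length.
    """
    W1, b1, W2, b2 = theta
    p = np.zeros(d_model)        # initial encoding p(0)
    out = [p.copy()]
    for i in range(1, seq_len):
        t = i * dt
        # h(t, p): MLP on the current time and current encoding
        hid = np.tanh(W1 @ np.concatenate(([t], p)) + b1)
        dp = W2 @ hid + b2
        p = p + dt * dp          # Euler step: p(t + dt) = p(t) + dt * h(t, p)
        out.append(p.copy())
    return np.stack(out)         # shape: (seq_len, d_model)
```

Because the dynamics are deterministic, integrating to a longer horizon reproduces the shorter sequence's encodings as a prefix — this is the extrapolation property the abstract refers to, in contrast to a fixed position-embedding table.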


Benchmarks

Benchmark                                        Methodology     Metric
linguistic-acceptability-on-cola                 FLOATER-large   Accuracy: 69%
machine-translation-on-wmt2014-english-french    FLOATER-large   BLEU score: 42.7
machine-translation-on-wmt2014-english-german    FLOATER-large   BLEU score: 29.2
semantic-textual-similarity-on-mrpc              FLOATER-large   Accuracy: 91.4%
sentiment-analysis-on-sst-2-binary               FLOATER-large   Accuracy: 96.7%

