Continual Transformers: Redundancy-Free Attention for Online Inference

Lukas Hedegaard, Arian Bakhtiarnia, Alexandros Iosifidis

Abstract

Transformers in their common form are inherently limited to operating on whole token sequences rather than on one token at a time. Consequently, their use during online inference on time-series data entails considerable redundancy due to the overlap between successive token sequences. In this work, we propose novel formulations of the Scaled Dot-Product Attention, which enable Transformers to perform efficient online token-by-token inference on a continual input stream. Importantly, our modifications are purely to the order of computations, while the outputs and learned weights are identical to those of the original Transformer Encoder. We validate our Continual Transformer Encoder with experiments on the THUMOS14, TVSeries and GTZAN datasets with remarkable results: our Continual one- and two-block architectures reduce the floating-point operations per prediction by up to 63x and 2.6x, respectively, while retaining predictive performance.
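To make the reordering concrete, below is a minimal PyTorch sketch of the single-output form of continual attention: keys and values of past tokens are cached in a sliding window, so each arriving token triggers one query projection and one attention row rather than a full recomputation over the sequence. The module and method names (ContinualSingleOutputAttention, forward_step) and the fixed window_size are assumptions for illustration, not the authors' implementation; see the repository listed below for the official code.

```python
import torch
import torch.nn.functional as F


class ContinualSingleOutputAttention(torch.nn.Module):
    """Inference-time sketch: scaled dot-product attention over a sliding
    window of cached keys/values, producing the output for the newest
    token only. Per-step cost is O(n * d) for window size n and embedding
    dimension d, instead of O(n^2 * d) for recomputing full attention."""

    def __init__(self, embed_dim: int, window_size: int):
        super().__init__()
        self.q_proj = torch.nn.Linear(embed_dim, embed_dim)
        self.k_proj = torch.nn.Linear(embed_dim, embed_dim)
        self.v_proj = torch.nn.Linear(embed_dim, embed_dim)
        self.window_size = window_size
        self.scale = embed_dim ** -0.5
        self.k_cache: list[torch.Tensor] = []  # last <= window_size keys
        self.v_cache: list[torch.Tensor] = []  # last <= window_size values

    @torch.no_grad()
    def forward_step(self, x: torch.Tensor) -> torch.Tensor:
        """Consume one token x of shape (batch, embed_dim) and return the
        attention output for that token, shape (batch, embed_dim)."""
        q = self.q_proj(x)                    # query for the new token only
        self.k_cache.append(self.k_proj(x))   # K/V of older tokens are reused,
        self.v_cache.append(self.v_proj(x))   # not recomputed
        if len(self.k_cache) > self.window_size:
            self.k_cache.pop(0)               # evict the oldest token
            self.v_cache.pop(0)
        k = torch.stack(self.k_cache, dim=1)  # (batch, n, embed_dim)
        v = torch.stack(self.v_cache, dim=1)  # (batch, n, embed_dim)
        scores = q.unsqueeze(1) @ k.transpose(1, 2) * self.scale  # (batch, 1, n)
        return (F.softmax(scores, dim=-1) @ v).squeeze(1)


# One prediction per incoming token of a continual stream:
layer = ContinualSingleOutputAttention(embed_dim=64, window_size=32)
for x_t in torch.randn(100, 1, 64):  # 100 time steps, batch size 1
    y_t = layer.forward_step(x_t)    # (1, 64)
```

Once the window is full, the output for the newest position matches what ordinary attention over the same window would produce, which is why the learned weights transfer unchanged; the savings come purely from reusing cached projections instead of recomputing them for every overlapping window.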

Code Repositories

lukashedegaard/continual-transformers (official, PyTorch)

Benchmarks

Online Action Detection on THUMOS-14

Methodology   MFLOPs per pred   mAP
OadTR         2513.5            64.2
OadTR-b2      1075.7            64.5
OadTR-b1      673               63.9
CoOadTR-b2    411.9             64.4
CoOadTR-b1    10.6              n/a

Online Action Detection on TVSeries

Methodology   mCAP
OadTR         88.6
OadTR-b2      88.3
OadTR-b1      88.1
CoOadTR-b1    87.7
CoOadTR-b2    87.6
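These figures are consistent with the abstract's efficiency claims: for the one-block models, 673 / 10.6 ≈ 63x fewer FLOPs per prediction (CoOadTR-b1 vs. OadTR-b1), and for the two-block models, 1075.7 / 411.9 ≈ 2.6x (CoOadTR-b2 vs. OadTR-b2).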
