Work in Progress: Linear Transformers for TinyML

Luca Benini, Michele Magno, Cristian Cioflan, Moritz Scherer

Abstract

We present WaveFormer, a neural network architecture based on a linear attention transformer that enables long-sequence inference on TinyML devices. WaveFormer achieves a new state-of-the-art accuracy of 98.8% and 99.1% on the Google Speech Commands V2 keyword spotting (KWS) dataset for the 12- and 35-class problems, respectively, with only 130 kB of weight storage, compatible with MCU-class devices. Top-1 accuracy improves by 0.1 and 0.9 percentage points while model size and operation count are reduced by 2.5× and 4.7× compared to the state of the art. We also propose a hardware-friendly 8-bit integer quantization algorithm for the linear attention operator, enabling efficient deployment on low-cost, ultra-low-power microcontrollers without loss of accuracy.
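
The key idea behind linear attention is to replace softmax(QKᵀ)V, which costs O(N²) in sequence length, with a feature-mapped factorization φ(Q)(φ(K)ᵀV) that costs O(N); this is what makes long-sequence inference tractable on memory-constrained devices. Below is a minimal NumPy sketch in the style of Katharopoulos et al.; the elu(x)+1 feature map, shapes, and function names are illustrative assumptions, not details confirmed by this abstract.

```python
import numpy as np

def elu_feature_map(x):
    # phi(x) = elu(x) + 1 keeps features positive, a common choice
    # for linear attention (assumption: WaveFormer's feature map
    # may differ).
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V, eps=1e-6):
    """O(N) attention: softmax(QK^T)V is approximated by
    phi(Q) (phi(K)^T V) / (phi(Q) phi(K)^T 1)."""
    Qf = elu_feature_map(Q)          # (N, d)
    Kf = elu_feature_map(K)          # (N, d)
    KV = Kf.T @ V                    # (d, d_v): fixed size, never (N, N)
    Z = Qf @ Kf.sum(axis=0)          # (N,): per-query normalizer
    return (Qf @ KV) / (Z[:, None] + eps)

# Toy usage: sequence of N=64 tokens, head dimension 16.
rng = np.random.default_rng(0)
N, d = 64, 16
Q, K, V = (rng.standard_normal((N, d)) for _ in range(3))
out = linear_attention(Q, K, V)
print(out.shape)  # (64, 16)
```

Note that the intermediate KV matrix has shape (d, d_v) independent of sequence length, which is why activation memory stays flat as sequences grow.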
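For the quantized operator, one plausible reading of "hardware-friendly 8-bit integer quantization" is symmetric per-tensor int8 quantization with int32 accumulation, sketched below. The quantize_sym helper, the scale handling, and the final rescale are assumptions for illustration, not the paper's published algorithm.

```python
import numpy as np

def quantize_sym(x, n_bits=8):
    # Symmetric per-tensor quantization: x ~= scale * x_q, x_q in int8.
    # (Assumption: the paper may use a different granularity or scheme.)
    scale = np.max(np.abs(x)) / (2 ** (n_bits - 1) - 1)
    x_q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return x_q, scale

def int8_linear_attention(Qf, Kf, V):
    """Integer sketch of phi(Q)(phi(K)^T V) / (phi(Q) phi(K)^T 1):
    int8 operands, int32 accumulation as in typical integer kernels.
    Qf, Kf are the (positive) feature-mapped queries and keys."""
    q_q, s_q = quantize_sym(Qf)
    k_q, s_k = quantize_sym(Kf)
    v_q, s_v = quantize_sym(V)
    k32 = k_q.astype(np.int32)
    KV = k32.T @ v_q.astype(np.int32)            # (d, d_v) int32
    num = q_q.astype(np.int32) @ KV              # (N, d_v) int32
    Z = q_q.astype(np.int32) @ k32.sum(axis=0)   # (N,) int32
    # s_q and s_k cancel between numerator and normalizer;
    # only the value scale s_v survives the ratio.
    return s_v * num / (Z[:, None] + 1)

# Usage with positive feature-mapped inputs (elu(x)+1 style), N=64 tokens.
rng = np.random.default_rng(0)
Qf, Kf = np.abs(rng.standard_normal((2, 64, 16))) + 1.0
V = rng.standard_normal((64, 16))
print(int8_linear_attention(Qf, Kf, V).shape)  # (64, 16)
```

Because the feature map keeps queries and keys positive, the normalizer Z is non-negative, so the ratio stays well behaved in integer arithmetic; a scheme along these lines would explain how the operator runs on MCUs without floating-point attention.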

Benchmarks

Benchmark | Methodology | Metrics
keyword-spotting-on-google-speech-commands | WaveFormer | Google Speech Commands V2 12: 98.8; Google Speech Commands V2 35: 99.1
