AST: Audio Spectrogram Transformer
Yuan Gong, Yu-An Chung, James Glass

Abstract
In the past decade, convolutional neural networks (CNNs) have been widely adopted as the main building block for end-to-end audio classification models, which aim to learn a direct mapping from audio spectrograms to corresponding labels. To better capture long-range global context, a recent trend is to add a self-attention mechanism on top of the CNN, forming a CNN-attention hybrid model. However, it is unclear whether the reliance on a CNN is necessary, and if neural networks purely based on attention are sufficient to obtain good performance in audio classification. In this paper, we answer the question by introducing the Audio Spectrogram Transformer (AST), the first convolution-free, purely attention-based model for audio classification. We evaluate AST on various audio classification benchmarks, where it achieves new state-of-the-art results of 0.485 mAP on AudioSet, 95.6% accuracy on ESC-50, and 98.1% accuracy on Speech Commands V2.
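The core idea is directly analogous to the Vision Transformer: the spectrogram is cut into 16x16 patches, each patch is linearly projected to an embedding, and a standard Transformer encoder with a [CLS] token produces the clip-level prediction. Below is a minimal PyTorch sketch of this convolution-free design; it is an illustration, not the authors' implementation (AST additionally uses overlapping patches and ImageNet-initialized weights), and the hyperparameters shown are assumptions.

```python
# Minimal sketch of the convolution-free idea behind AST (illustrative,
# not the authors' code): split the spectrogram into non-overlapping
# 16x16 patches, linearly project each patch, and classify from a [CLS]
# token after a standard Transformer encoder. Hyperparameters
# (embed_dim, depth, n_heads) are assumed ViT-Base-like values.
import torch
import torch.nn as nn


class PatchSpectrogramTransformer(nn.Module):
    def __init__(self, n_mels=128, n_frames=1024, patch=16,
                 embed_dim=768, depth=12, n_heads=12, n_classes=527):
        super().__init__()
        n_patches = (n_mels // patch) * (n_frames // patch)
        # Linear projection of flattened patches -- no convolutional stack.
        self.proj = nn.Linear(patch * patch, embed_dim)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, n_patches + 1, embed_dim))
        layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(embed_dim, n_classes)
        self.patch = patch

    def forward(self, spec):  # spec: (batch, n_mels, n_frames)
        b, _, _ = spec.shape
        p = self.patch
        # Cut the spectrogram into p x p patches and flatten each one.
        patches = (spec.unfold(1, p, p).unfold(2, p, p)  # (b, f/p, t/p, p, p)
                       .reshape(b, -1, p * p))
        x = self.proj(patches)
        cls = self.cls_token.expand(b, -1, -1)
        x = torch.cat([cls, x], dim=1) + self.pos_embed
        x = self.encoder(x)
        return self.head(x[:, 0])  # predict from the [CLS] embedding


model = PatchSpectrogramTransformer()
logits = model(torch.randn(2, 128, 1024))  # two 10-second AudioSet-style clips
print(logits.shape)  # torch.Size([2, 527])
```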
Benchmarks
| Benchmark | Model | Metric | Result |
|---|---|---|---|
| audio-classification-on-audioset | AST (Ensemble) | Test mAP | 0.485 |
| audio-classification-on-audioset | AST (Single) | Test mAP | 0.459 |
| audio-classification-on-esc-50 | Audio Spectrogram Transformer (pre-trained on AudioSet and ImageNet) | Accuracy (5-fold) | 95.7 |
| audio-classification-on-speech-commands-1 | AST-S | Accuracy | 98.11±0.05 |
| audio-tagging-on-audioset | Audio Spectrogram Transformer | Mean average precision | 0.485 |
| keyword-spotting-on-google-speech-commands | Audio Spectrogram Transformer | Accuracy (Speech Commands V2, 35 classes) | 98.11 |
| speech-emotion-recognition-on-crema-d | ViT | Accuracy | 67.81 |
| time-series-on-speech-commands | ViT | Test accuracy (%) | 98.11 |
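The AudioSet rows above are multi-label tagging results: mAP is the unweighted mean of per-class average precision over AudioSet's 527 classes. A minimal scikit-learn sketch of that metric follows; the arrays are random placeholders, not real AudioSet predictions.

```python
# Sketch of the AudioSet evaluation metric: mAP is the unweighted mean
# of per-class average precision over the 527 classes.
import numpy as np
from sklearn.metrics import average_precision_score

n_clips, n_classes = 1000, 527
y_true = np.random.randint(0, 2, size=(n_clips, n_classes))  # multi-hot labels
y_score = np.random.rand(n_clips, n_classes)                 # sigmoid outputs

per_class_ap = [average_precision_score(y_true[:, c], y_score[:, c])
                for c in range(n_classes)]
print("mAP:", np.mean(per_class_ap))
```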