Temporal Knowledge Distillation for On-device Audio Classification

Kwanghee Choi Martin Kersner Jacob Morton Buru Chang

Abstract

Improving the performance of on-device audio classification models remains a challenge given the computational limits of the mobile environment. Many studies leverage knowledge distillation to boost predictive performance by transferring knowledge from large models to on-device models. However, most either lack a mechanism to distill the temporal information that is crucial to audio classification tasks, or require the teacher and student to share a similar architecture. In this paper, we propose a new knowledge distillation method designed to incorporate the temporal knowledge embedded in the attention weights of large transformer-based models into on-device models. Our distillation method is applicable to various types of architectures, including non-attention-based architectures such as CNNs or RNNs, while retaining the original network architecture during inference. Through extensive experiments on both an audio event detection dataset and a noisy keyword spotting dataset, we show that our proposed method improves predictive performance across diverse on-device architectures.
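
To make the idea concrete, below is a minimal PyTorch sketch of this kind of attention-based temporal distillation, not the authors' reference implementation. The teacher interface (returning its attention maps alongside logits), the head-and-query averaging in `temporal_target`, the auxiliary head `aux_head`, and the loss weights `alpha` and `beta` are all illustrative assumptions. The student learns to predict a per-frame importance distribution derived from the teacher's attention weights via a training-only auxiliary branch, so the deployed architecture is unchanged at inference.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StudentWithAuxHead(nn.Module):
    """On-device student (any backbone: CNN, RNN, ...) plus a small auxiliary
    branch that predicts the teacher's per-frame temporal summary. The
    auxiliary branch is used only during training and dropped at inference."""
    def __init__(self, backbone, feat_dim, n_frames, n_classes):
        super().__init__()
        self.backbone = backbone                       # maps input -> (B, feat_dim)
        self.classifier = nn.Linear(feat_dim, n_classes)
        self.aux_head = nn.Linear(feat_dim, n_frames)  # training-only branch

    def forward(self, x):
        h = self.backbone(x)
        return self.classifier(h), self.aux_head(h)

def temporal_target(attn):
    """Collapse teacher self-attention maps (B, heads, T, T) into a per-frame
    importance distribution (B, T) by averaging over heads and query positions.
    This aggregation is an assumption for illustration."""
    return F.softmax(attn.mean(dim=1).mean(dim=1), dim=-1)

def distillation_loss(student, teacher, x, y, alpha=0.5, beta=0.5):
    """y: multi-hot labels (B, n_classes), as in multi-label audio tagging."""
    with torch.no_grad():
        t_logits, t_attn = teacher(x)   # assumes the teacher exposes attention maps
        target = temporal_target(t_attn)
    s_logits, s_temporal = student(x)
    task = F.binary_cross_entropy_with_logits(s_logits, y)
    # Standard logit distillation with soft sigmoid targets (multi-label setting).
    kd = F.binary_cross_entropy_with_logits(s_logits, torch.sigmoid(t_logits))
    # Temporal distillation: match the teacher's per-frame importance distribution.
    temporal = F.kl_div(F.log_softmax(s_temporal, dim=-1), target,
                        reduction="batchmean")
    return task + alpha * kd + beta * temporal
```

Because the temporal target depends only on the student's pooled features, the same loss applies unchanged to CNN or RNN backbones, which matches the architecture-agnostic claim above.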

Benchmarks

Benchmark: audio-classification-on-fsd50k
Methodology: Temporal Knowledge Distillation for On-device Audio Classification
Metrics: mAP 54.8
