Temporal-Channel Modeling in Multi-head Self-Attention for Synthetic Speech Detection

Duc-Tuan Truong, Ruijie Tao, Tuan Nguyen, Hieu-Thi Luong, Kong Aik Lee, Eng Siong Chng

Abstract

Recent synthetic speech detectors leveraging the Transformer model have superior performance compared to their convolutional neural network counterparts. This improvement could be due to the powerful modeling ability of the multi-head self-attention (MHSA) in the Transformer model, which learns the temporal relationship of each input token. However, artifacts of synthetic speech can be located in specific regions of both frequency channels and temporal segments, while MHSA neglects this temporal-channel dependency of the input sequence. In this work, we propose a Temporal-Channel Modeling (TCM) module to enhance MHSA's capability for capturing temporal-channel dependencies. Experimental results on ASVspoof 2021 show that, with only 0.03M additional parameters, the TCM module can outperform the state-of-the-art system by 9.25% in EER. A further ablation study reveals that utilizing both temporal and channel information yields the most improvement for detecting synthetic speech.
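The core idea — that self-attention can be applied along the temporal axis (standard MHSA) or along the channel axis — can be illustrated with a minimal NumPy sketch. This is not the paper's TCM module (see the official repository below for the actual PyTorch implementation); the single-head attention, the toy feature sizes, and the additive fusion here are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # x: (tokens, dim). Single head, projection weights omitted for brevity.
    scores = x @ x.T / np.sqrt(x.shape[-1])   # (tokens, tokens)
    return softmax(scores) @ x                # (tokens, dim)

# Toy feature map: T=4 temporal frames, C=6 feature channels.
feat = np.random.default_rng(0).normal(size=(4, 6))

# Standard MHSA view: tokens are temporal frames, so attention is (T, T);
# it models how frames relate but treats each channel vector as a whole.
temporal_out = self_attention(feat)           # (4, 6)

# Channel view (the dependency TCM adds): transpose so tokens are channels,
# giving a (C, C) attention map that can capture cross-channel artifacts.
channel_out = self_attention(feat.T).T        # (4, 6)

# Fuse both views; the paper's actual fusion is more elaborate than this sum.
fused = temporal_out + channel_out
print(fused.shape)
```

Running the sketch prints `(4, 6)`: both views produce outputs with the original feature-map shape, so a fused temporal-channel representation can drop into the same place as a standard MHSA output.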

Code Repositories

ductuantruong/tcm_add (official, PyTorch)

Benchmarks

Benchmark: audio-deepfake-detection-on-asvspoof-2021
Methodology: TCM-Add
Metrics:
  21DF EER: 2.14
  21LA EER: 2.99
