Mamba-SEUNet: Mamba UNet for Monaural Speech Enhancement

Junyu Wang, Zizhen Lin, Tianrui Wang, Meng Ge, Longbiao Wang, Jianwu Dang


Abstract

In recent speech enhancement (SE) research, the Transformer and its variants have emerged as the predominant architectures. However, the quadratic complexity of the self-attention mechanism limits practical deployment. Mamba, a novel state-space model (SSM), has seen widespread adoption in natural language processing and computer vision thanks to its strong long-sequence modeling capability and relatively low computational complexity. In this work, we introduce Mamba-SEUNet, an architecture that integrates Mamba with U-Net for SE tasks. By leveraging bidirectional Mamba to model the forward and backward dependencies of speech signals at different resolutions, and incorporating skip connections to capture multi-scale information, our approach achieves state-of-the-art (SOTA) performance. Experimental results on the VCTK+DEMAND dataset show that Mamba-SEUNet attains a PESQ score of 3.59 while maintaining low computational complexity. When combined with the Perceptual Contrast Stretching (PCS) technique, Mamba-SEUNet further improves the PESQ score to 3.73.
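The core idea the abstract describes — running a state-space recurrence over the signal in both directions so every time step sees both past and future context — can be illustrated with a toy scalar sketch. This is not the paper's implementation: the scalar coefficients `a`, `b`, `c` below stand in for Mamba's learned, input-dependent matrices, and elementwise summation stands in for the model's learned fusion of the two directions.

```python
def ssm_scan(x, a=0.9, b=1.0, c=1.0):
    """Linear SSM recurrence: h_t = a*h_{t-1} + b*x_t, output y_t = c*h_t."""
    h, ys = 0.0, []
    for xt in x:
        h = a * h + b * xt
        ys.append(c * h)
    return ys

def bidirectional_ssm(x):
    """Scan the sequence forward and backward, then fuse (here: sum).

    The forward pass carries information from earlier samples, the
    backward pass from later ones, so each output position depends on
    the whole sequence -- the property bidirectional Mamba exploits.
    """
    fwd = ssm_scan(x)
    bwd = ssm_scan(x[::-1])[::-1]
    return [f + g for f, g in zip(fwd, bwd)]

signal = [1.0, 0.0, 0.0, 2.0]
print(bidirectional_ssm(signal))
```

Note that each scan is a single linear-time pass over the sequence, which is the source of the sub-quadratic complexity contrasted with self-attention in the abstract.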

Benchmarks

Benchmark: speech-enhancement-on-demand
Methodology: Mamba-SEUNet L (+PCS)
Metrics:
  CBAK: 3.67
  COVL: 4.40
  CSIG: 4.82
  PESQ (wb): 3.73
  Para. (M): 6.28
  STOI: 96

