Improved Speech Enhancement with the Wave-U-Net

Craig Macartney; Tillman Weyde

Abstract

We study the use of the Wave-U-Net architecture for speech enhancement, a model introduced by Stoller et al. for the separation of music vocals and accompaniment. This end-to-end learning method for audio source separation operates directly in the time domain, permitting integrated modelling of phase information and the use of large temporal contexts. Our experiments show that the proposed method improves several metrics, namely PESQ, CSIG, CBAK, COVL and SSNR, over the state of the art on the speech enhancement task on the Voice Bank corpus (VCTK) dataset. We find that, compared to the original system designed for singing voice separation in music, a reduced number of hidden layers is sufficient for speech enhancement. We see this initial result as an encouraging signal to further explore speech enhancement in the time domain, both as an end in itself and as a pre-processing step for speech recognition systems.
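The architecture described above can be sketched in PyTorch (the language of the linked repositories). This is a minimal illustration, not the authors' implementation: the layer count, channel growth, and kernel sizes below are hypothetical placeholders, though they follow the Wave-U-Net pattern of strided downsampling blocks, linear-interpolation upsampling blocks, and skip connections concatenated at matching resolutions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WaveUNetSketch(nn.Module):
    """Minimal Wave-U-Net-style model for 1-D (time-domain) audio.

    Hypothetical hyperparameters for illustration; the paper tunes
    the number of layers (and finds fewer suffice for speech).
    """

    def __init__(self, layers=6, base_ch=24):
        super().__init__()
        self.down = nn.ModuleList()
        ch = 1  # mono waveform input
        for i in range(layers):
            out = base_ch * (i + 1)  # channels grow with depth
            self.down.append(nn.Conv1d(ch, out, kernel_size=15, padding=7))
            ch = out
        self.bottleneck = nn.Conv1d(ch, ch, kernel_size=15, padding=7)
        self.up = nn.ModuleList()
        for i in reversed(range(layers)):
            out = base_ch * (i + 1)
            # input = upsampled features concatenated with the skip connection
            self.up.append(nn.Conv1d(ch + out, out, kernel_size=5, padding=2))
            ch = out
        # final 1x1 conv also sees the raw input (last skip connection)
        self.out = nn.Conv1d(ch + 1, 1, kernel_size=1)

    def forward(self, x):
        # x: (batch, 1, T) with T a multiple of 2**layers
        skips, h = [], x
        for conv in self.down:
            h = F.leaky_relu(conv(h))
            skips.append(h)      # save features for the decoder
            h = h[:, :, ::2]     # decimate by 2 (downsampling)
        h = F.leaky_relu(self.bottleneck(h))
        for conv in self.up:
            # linear-interpolation upsampling back to the skip's resolution
            h = F.interpolate(h, scale_factor=2, mode="linear",
                              align_corners=False)
            h = torch.cat([h, skips.pop()], dim=1)
            h = F.leaky_relu(conv(h))
        return torch.tanh(self.out(torch.cat([h, x], dim=1)))

model = WaveUNetSketch(layers=3, base_ch=4)
noisy = torch.randn(2, 1, 1024)     # batch of noisy waveforms
enhanced = model(noisy)             # same shape as the input
```

Because the model operates directly on waveform samples, the enhanced output has the same length as the input and phase is modelled implicitly, with no spectrogram or phase-reconstruction step.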

Code Repositories

MattSegal/speech-enhancement (PyTorch, mentioned in GitHub)
pheepa/DCUnet (mentioned in GitHub)

Benchmarks

Benchmark: speech-enhancement-on-demand-1
Methodology: Wave-U-Net
Metrics: CBAK 3.24 | COVL 2.96 | CSIG 3.52 | PESQ 2.4
