

Speech Enhancement and Dereverberation with Diffusion-based Generative Models

Julius Richter; Simon Welker; Jean-Marie Lemercier; Bunlong Lay; Timo Gerkmann


Abstract

In this work, we build upon our previous publication and use diffusion-based generative models for speech enhancement. We present a detailed overview of the diffusion process, which is based on a stochastic differential equation, and provide an extensive theoretical examination of its implications. In contrast to usual conditional generation tasks, we do not start the reverse process from pure Gaussian noise but from a mixture of noisy speech and Gaussian noise. This matches our forward process, which moves from clean speech to noisy speech by including a drift term. We show that this procedure enables using only 30 diffusion steps to generate high-quality clean speech estimates. By adapting the network architecture, we are able to significantly improve the speech enhancement performance, indicating that the network, rather than the formalism, was the main limitation of our original approach. In an extensive cross-dataset evaluation, we show that the improved method can compete with recent discriminative models and achieves better generalization when evaluated on a corpus different from the one used for training. We complement the results with an instrumental evaluation using real-world noisy recordings and a listening experiment in which our proposed method is rated best. Examining different sampler configurations for solving the reverse process allows us to balance the performance and computational speed of the proposed method. Moreover, we show that the proposed method is also suitable for dereverberation and is thus not limited to additive background noise removal. Code and audio examples are available online at https://github.com/sp-uhh/sgmse.
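
To make the sampling procedure described in the abstract concrete, below is a minimal sketch of reverse-time Euler-Maruyama sampling for a drift-based forward SDE: the reverse process is initialized from the noisy mixture plus Gaussian noise rather than from pure noise, and 30 reverse steps are used. The hyperparameters (`GAMMA`, `SIGMA_MIN`, `SIGMA_MAX`), the diffusion schedule `g(t)`, and the placeholder `score_model` are illustrative assumptions, not the authors' exact implementation; see the linked repository for the real one.

```python
# Hedged sketch: reverse-time Euler-Maruyama sampling for a forward SDE of the
# form dx = gamma * (y - x) dt + g(t) dw, where y is the noisy mixture.
# Coefficients and the score model are placeholders, not the authors' code.
import math
import torch

# Assumed hyperparameters; the authors' values may differ.
GAMMA, SIGMA_MIN, SIGMA_MAX = 1.5, 0.05, 0.5
N_STEPS, T_FINAL, T_EPS = 30, 1.0, 0.03  # 30 reverse steps, as in the abstract


def g(t: float) -> float:
    """Diffusion coefficient of a variance-exploding-style schedule (assumed form)."""
    return SIGMA_MIN * (SIGMA_MAX / SIGMA_MIN) ** t * math.sqrt(2.0 * math.log(SIGMA_MAX / SIGMA_MIN))


def score_model(x: torch.Tensor, y: torch.Tensor, t: float) -> torch.Tensor:
    """Placeholder for the trained score network s_theta(x_t, y, t)."""
    return torch.zeros_like(x)


@torch.no_grad()
def sample(y: torch.Tensor) -> torch.Tensor:
    """Integrate the reverse SDE from t=T down to t~0, starting at y plus Gaussian noise."""
    # Initialization from the noisy mixture, not from pure Gaussian noise;
    # the noise scale here is a stand-in for the SDE's marginal std at t=T.
    x = y + g(T_FINAL) * torch.randn_like(y)
    dt = (T_FINAL - T_EPS) / N_STEPS
    for n in range(N_STEPS):
        t = T_FINAL - n * dt
        # Reverse-time drift: forward drift gamma*(y - x) minus g(t)^2 * score.
        drift = GAMMA * (y - x) - g(t) ** 2 * score_model(x, y, t)
        x = x - drift * dt + g(t) * math.sqrt(dt) * torch.randn_like(x)
    return x


# Toy usage; in practice y would be a (complex) spectrogram of the noisy recording.
enhanced = sample(torch.randn(1, 256, 128))
```

In the full pipeline, the predictor/corrector configuration and the number of steps can be varied to trade estimate quality against the number of network evaluations, which is the performance-speed balance discussed in the abstract.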

Code Repositories

sp-uhh/sgmse (official, PyTorch) — https://github.com/sp-uhh/sgmse

Benchmarks

Benchmark | Methodology | Metrics
speech-dereverberation-on-ears-reverb | SGMSE+ | ESTOI: 0.85; MOS Reverb: 4.73; PESQ-WB: 3.03; SI-SDR: 5.79 dB; SIGMOS: 3.49
speech-enhancement-on-demand | SGMSE+ (Diffusion Model) | PESQ-WB: 2.93
speech-enhancement-on-ears-wham | SGMSE+ | DNSMOS: 3.88; ESTOI: 0.73; PESQ-WB: 2.50; POLQA: 3.40; SI-SDR: 16.78 dB; SIGMOS: 3.41
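
For reference, the SI-SDR values in the table above are in dB and follow the standard scale-invariant SDR definition. The snippet below is a generic sketch of that computation, not code taken from the benchmark pipeline.

```python
# Hedged sketch: standard scale-invariant SDR (SI-SDR) in dB between a clean
# reference signal and an enhanced estimate; not taken from the benchmark code.
import torch


def si_sdr(estimate: torch.Tensor, reference: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Compute SI-SDR in dB for 1-D time-domain signals of equal length."""
    estimate = estimate - estimate.mean()
    reference = reference - reference.mean()
    # Project the estimate onto the reference to remove the scale ambiguity.
    alpha = torch.dot(estimate, reference) / (torch.dot(reference, reference) + eps)
    target = alpha * reference
    residual = estimate - target
    return 10.0 * torch.log10(target.pow(2).sum() / (residual.pow(2).sum() + eps))


# Example: a slightly perturbed copy of the reference yields a high SI-SDR.
ref = torch.randn(16000)
est = ref + 0.1 * torch.randn(16000)
print(float(si_sdr(est, ref)))
```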
