MossFormer2: Combining Transformer and RNN-Free Recurrent Network for Enhanced Time-Domain Monaural Speech Separation

Shengkui Zhao, Yukun Ma, Chongjia Ni, Chong Zhang, Hao Wang, Trung Hieu Nguyen, Kun Zhou, Jiaqi Yip, Dianwen Ng, Bin Ma

Abstract

Our previously proposed MossFormer has achieved promising performance in monaural speech separation. However, it predominantly relies on a self-attention-based MossFormer module, which tends to emphasize longer-range, coarser-scale dependencies and is less effective at modelling finer-scale recurrent patterns. In this paper, we introduce a novel hybrid model that can capture both long-range, coarse-scale dependencies and fine-scale recurrent patterns by integrating a recurrent module into the MossFormer framework. Instead of applying recurrent neural networks (RNNs) with traditional recurrent connections, we present a recurrent module based on a feedforward sequential memory network (FSMN), which can be considered an "RNN-free" recurrent network because it captures recurrent patterns without recurrent connections. Our recurrent module mainly comprises a dilated FSMN block enhanced with gated convolutional units (GCUs) and dense connections; a bottleneck layer and an output layer are also added to control information flow. The recurrent module relies only on linear projections and convolutions, allowing seamless, parallel processing of the entire sequence. The resulting MossFormer2 hybrid model delivers substantial improvements over MossFormer and surpasses other state-of-the-art methods on the WSJ0-2/3mix, Libri2Mix, and WHAM!/WHAMR! benchmarks (https://github.com/modelscope/ClearerVoice-Studio).
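To make the abstract's description concrete, below is a minimal PyTorch sketch of an FSMN-style "RNN-free" recurrent module: a bottleneck projection, a stack of dilated depthwise convolutions acting as FSMN memories, each gated by a convolutional unit (GCU) and combined via dense connections, followed by an output projection. All class names, layer sizes, and the exact dense-connection scheme here are illustrative assumptions, not the authors' implementation; see the alibabasglab/MossFormer2 repository for the actual code.

```python
# Hypothetical sketch of an FSMN-based recurrent module; names and sizes are
# assumptions for illustration only, not MossFormer2's real implementation.
import torch
import torch.nn as nn

class DilatedFSMNBlock(nn.Module):
    """Captures recurrent patterns with a dilated depthwise convolution over
    time (an FSMN-style memory) instead of recurrent connections, so the whole
    sequence can be processed in parallel."""
    def __init__(self, channels: int, kernel_size: int = 3, dilation: int = 1):
        super().__init__()
        pad = (kernel_size - 1) // 2 * dilation
        # Depthwise conv acts as the FSMN memory over neighboring frames.
        self.memory = nn.Conv1d(channels, channels, kernel_size,
                                padding=pad, dilation=dilation, groups=channels)
        # Gated convolutional unit (GCU): an elementwise gate on the memory output.
        self.gate = nn.Sequential(nn.Conv1d(channels, channels, 1), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time); residual keeps information flowing.
        return x + self.memory(x) * self.gate(x)

class RecurrentModule(nn.Module):
    """Bottleneck -> dilated FSMN blocks with dense (cumulative) connections
    -> output projection, mirroring the structure the abstract describes."""
    def __init__(self, dim: int, bottleneck: int = 64, num_blocks: int = 3):
        super().__init__()
        self.bottleneck = nn.Conv1d(dim, bottleneck, 1)  # controls info flow in
        self.blocks = nn.ModuleList(
            DilatedFSMNBlock(bottleneck, dilation=2 ** i) for i in range(num_blocks)
        )
        self.output = nn.Conv1d(bottleneck, dim, 1)      # controls info flow out

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.bottleneck(x)
        dense = h
        for block in self.blocks:
            h = block(dense)
            dense = dense + h  # dense connection: accumulate earlier block outputs
        return self.output(dense)

seq = torch.randn(2, 512, 1000)             # (batch, feature dim, frames)
print(RecurrentModule(dim=512)(seq).shape)  # torch.Size([2, 512, 1000])
```

In the full hybrid model, a module like this would be interleaved with the attention-based MossFormer blocks so that each layer sees both coarse global context from self-attention and fine local recurrence from the convolutional memory.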

Code Repositories

alibabasglab/MossFormer2 (PyTorch)

Benchmarks

Benchmark                        | Methodology                     | Metrics
speech-separation-on-libri2mix   | MossFormer2 (w/o DM)            | SI-SDRi: 21.7
speech-separation-on-libri2mix   | MossFormer2 (w/ speed perturb)  | SI-SDRi: 22.2
speech-separation-on-wham        | MossFormer2                     | SI-SDRi: 18.1
speech-separation-on-whamr       | MossFormer2                     | SI-SDRi: 17.0
speech-separation-on-wsj0-2mix   | MossFormer2 (L)                 | SI-SDRi: 24.1; Parameters: 55.7 M
speech-separation-on-wsj0-3mix   | MossFormer2                     | SI-SDRi: 22.2
