HyperAI
Revisiting Multi-modal Emotion Learning with Broad State Space Models and Probability-guidance Fusion

Yuntao Shou; Tao Meng; Fuchen Zhang; Nan Yin; Keqin Li
Abstract

Multi-modal Emotion Recognition in Conversation (MERC) has received considerable attention in various fields, e.g., human-computer interaction and recommendation systems. Most existing works perform feature disentanglement and fusion to extract emotional contextual information from multi-modal features for emotion classification. After revisiting the characteristics of MERC, we argue that long-range contextual semantic information should be extracted in the feature disentanglement stage and that inter-modal semantic consistency should be maximized in the feature fusion stage. Among recent State Space Models (SSMs), Mamba can efficiently model long-distance dependencies. Therefore, in this work we build on these insights to further improve the performance of MERC. Specifically, in the feature disentanglement stage, we propose a Broad Mamba, which does not rely on a self-attention mechanism for sequence modeling; instead, it uses state space models to compress the emotional representation and a broad learning system to explore the potential data distribution in broad space. Unlike previous SSMs, we design a bidirectional SSM convolution to extract global contextual information. In the feature fusion stage, we design a probability-guided multi-modal fusion strategy to maximize the consistency of information between modalities. Experimental results show that the proposed method overcomes the computational and memory limitations of Transformers when modeling long-distance contexts and has great potential to become a next-generation general architecture for MERC.
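The abstract's bidirectional SSM idea can be illustrated with a minimal sketch: a linear state-space recurrence is run once forward and once backward over the sequence, so every position receives context from both directions. This is a toy linear recurrence, not the paper's actual selective-scan implementation; the function names, shapes, and parameter matrices below are illustrative assumptions.

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """One directional pass of a linear SSM:
    h_t = A h_{t-1} + B x_t,  y_t = C h_t.
    x: (T, d_in), A: (d_state, d_state), B: (d_state, d_in), C: (d_out, d_state).
    """
    T = x.shape[0]
    h = np.zeros(A.shape[0])
    ys = np.empty((T, C.shape[0]))
    for t in range(T):
        h = A @ h + B @ x[t]   # state carries a compressed summary of the past
        ys[t] = C @ h          # read out from the state
    return ys

def bidirectional_ssm(x, A, B, C):
    """Sum a forward scan and a time-reversed backward scan so each
    position sees both past and future context (global context)."""
    fwd = ssm_scan(x, A, B, C)
    bwd = ssm_scan(x[::-1], A, B, C)[::-1]
    return fwd + bwd
```

A forward-only scan leaves the output at position 0 blind to later utterances; the bidirectional variant makes it depend on the whole sequence, which is the property the abstract attributes to its bidirectional SSM convolution.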

Benchmarks

Benchmark                                     Methodology       Accuracy   Weighted-F1
emotion-recognition-in-conversation-on        Mamba-like Model  73.1       73.3
emotion-recognition-in-conversation-on-meld   Mamba-like Model  68.0       67.6
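The abstract does not spell out the probability-guidance mechanism, so the following is only one plausible reading: weight each modality's features by the confidence (maximum class probability) of that modality's own emotion prediction before fusing. All names and the weighting rule here are hypothetical.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def probability_guided_fusion(feats, logits):
    """Fuse per-modality feature vectors, weighting each modality by the
    confidence of its own emotion classifier (a hypothetical scheme).
    feats:  dict modality -> (d,) feature vector
    logits: dict modality -> (num_classes,) classifier logits
    Returns the fused (d,) vector and the normalized weights.
    """
    conf = {m: softmax(l).max() for m, l in logits.items()}
    total = sum(conf.values())
    weights = {m: c / total for m, c in conf.items()}
    fused = sum(weights[m] * feats[m] for m in feats)
    return fused, weights
```

Under this sketch, a modality whose classifier is more certain (a more peaked probability distribution) contributes more to the fused representation, which is one way "probability guidance" could push the fusion toward inter-modal consistency.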

