MM-DFN: Multimodal Dynamic Fusion Network for Emotion Recognition in Conversations
Dou Hu Xiaolong Hou Lingwei Wei Lianxin Jiang Yang Mo

Abstract
Emotion Recognition in Conversations (ERC) has considerable prospects for developing empathetic machines. For multimodal ERC, it is vital to understand context and fuse modality information in conversations. Recent graph-based fusion methods generally aggregate multimodal information by exploring unimodal and cross-modal interactions in a graph. However, they accumulate redundant information at each layer, limiting the context understanding between modalities. In this paper, we propose a novel Multimodal Dynamic Fusion Network (MM-DFN) to recognize emotions by fully understanding multimodal conversational context. Specifically, we design a new graph-based dynamic fusion module to fuse multimodal contextual features in a conversation. The module reduces redundancy and enhances complementarity between modalities by capturing the dynamics of contextual information in different semantic spaces. Extensive experiments on two public benchmark datasets demonstrate the effectiveness and superiority of MM-DFN.
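The abstract describes fusing multimodal contextual features by passing messages over a conversation graph whose nodes span modalities. The following is only an illustrative sketch of that general idea, not the authors' MM-DFN module: a single mean-aggregation graph-convolution step over utterance nodes from three modalities, with all names and shapes invented for the example.

```python
import numpy as np

def graph_fusion_layer(features, adj, weight):
    """One illustrative graph-convolution step (not the MM-DFN layer):
    average each node's neighbor features, then apply a learned projection.
    features: (N, d) node features, adj: (N, N) adjacency, weight: (d, d)."""
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0                      # guard against isolated nodes
    agg = (adj @ features) / deg             # mean aggregation over neighbors
    return np.tanh(agg @ weight)             # nonlinear projection

# Toy conversation: 2 utterances x 3 modalities (text, audio, visual) = 6 nodes.
rng = np.random.default_rng(0)
N, d = 6, 4
feats = rng.standard_normal((N, d))
adj = np.ones((N, N)) - np.eye(N)            # connect every node to every other
W = rng.standard_normal((d, d)) * 0.1
fused = graph_fusion_layer(feats, adj, W)
print(fused.shape)                           # each node now mixes cross-modal context
```

Stacking several such layers lets information propagate across modalities and turns; the paper's contribution is controlling that propagation dynamically so redundant information is not re-accumulated at every layer.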
Benchmarks
| Benchmark | Methodology | Accuracy | Weighted-F1 |
|---|---|---|---|
| emotion-recognition-in-conversation-on | MM-DFN | 68.21 | 68.18 |
| emotion-recognition-in-conversation-on-7 | MM-DFN | 80.91 | 80.83 |
| emotion-recognition-in-conversation-on-cmu-2 | MM-DFN | 45.29 | 42.98 |
| emotion-recognition-in-conversation-on-meld | MM-DFN | 62.49 | 59.46 |