UniMSE: Towards Unified Multimodal Sentiment Analysis and Emotion Recognition
Guimin Hu, Ting-En Lin, Yi Zhao, Guangming Lu, Yuchuan Wu, Yongbin Li

Abstract
Multimodal sentiment analysis (MSA) and emotion recognition in conversation (ERC) are key research topics for computers to understand human behaviors. From a psychological perspective, emotions are expressions of affect or feelings over a short period, while sentiments are formed and held over a longer period. However, most existing works study sentiment and emotion separately and do not fully exploit the complementary knowledge between the two. In this paper, we propose a multimodal sentiment knowledge-sharing framework (UniMSE) that unifies the MSA and ERC tasks at the feature, label, and model levels. We perform modality fusion at the syntactic and semantic levels and introduce contrastive learning between modalities and samples to better capture the differences and consistency between sentiments and emotions. Experiments on four public benchmark datasets, MOSI, MOSEI, MELD, and IEMOCAP, demonstrate the effectiveness of the proposed method, which achieves consistent improvements over state-of-the-art methods.
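The abstract does not spell out the form of the contrastive objective used between modalities and samples. As a rough illustration only (not the paper's actual implementation), a minimal InfoNCE-style sketch is shown below, where representations of the same utterance from two modalities act as a positive pair and other utterances in the batch act as negatives; all tensor names, dimensions, and the temperature value are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, temperature=0.07):
    """InfoNCE-style contrastive loss between two batches of representations.

    anchor, positive: (batch, dim) tensors; the i-th rows form a positive pair,
    and all other pairings within the batch serve as negatives.
    """
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    # Pairwise cosine similarities scaled by temperature -> (batch, batch) logits.
    logits = anchor @ positive.t() / temperature
    # The matching index is the positive class for each row.
    targets = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, targets)

# Toy usage with random features standing in for fused utterance representations
# from two modalities (e.g., text and audio) of the same conversation batch.
batch, dim = 8, 256
text_repr = torch.randn(batch, dim)
audio_repr = torch.randn(batch, dim)
loss = info_nce(text_repr, audio_repr)
print(loss.item())
```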
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| emotion-recognition-in-conversation-on | UniMSE | Accuracy: 70.56, Weighted-F1: 70.66 |
| emotion-recognition-in-conversation-on-meld | UniMSE | Accuracy: 65.09, Weighted-F1: 65.51 |
| multimodal-sentiment-analysis-on-cmu-mosei-1 | UniMSE | Accuracy: 87.50, F1: 87.46, MAE: 0.523 |
| multimodal-sentiment-analysis-on-cmu-mosi | UniMSE | Acc-2: 86.9, Acc-7: 48.68, Corr: 0.809, F1: 86.42, MAE: 0.691 |
| multimodal-sentiment-analysis-on-mosi | UniMSE | Accuracy: 86.9, F1 score: 86.42 |