Multimodal Emotion Recognition on IEMOCAP 4
Evaluation Metrics
Accuracy
F1
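
As a quick reference, the sketch below shows one common way to compute these two metrics with scikit-learn. The label set, the example predictions, and the weighted-average F1 are illustrative assumptions, not a specification taken from this benchmark.

```python
# Minimal sketch: computing Accuracy and F1 for a 4-class emotion task.
# The four emotion classes and the weighted F1 averaging are assumptions for illustration.
from sklearn.metrics import accuracy_score, f1_score

# Hypothetical gold labels and model predictions over four IEMOCAP-style classes.
labels = ["angry", "happy", "sad", "neutral", "angry", "neutral"]
preds  = ["angry", "happy", "neutral", "neutral", "sad", "neutral"]

accuracy = accuracy_score(labels, preds)
weighted_f1 = f1_score(labels, preds, average="weighted")

print(f"Accuracy:    {accuracy:.4f}")
print(f"Weighted F1: {weighted_f1:.4f}")
```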
Evaluation Results
Results of each model on this benchmark
| Model | Accuracy | F1 | Paper Title | Repository |
|---|---|---|---|---|
| GraphSmile | 86.53 | - | Tracing Intricate Cues in Dialogue: Joint Graph Structure and Sentiment Dynamics for Multimodal Emotion Recognition | |
| DANN | 82.7 | - | Context-Dependent Domain Adversarial Neural Network for Multimodal Emotion Recognition | - |
| MMER | 81.7 | - | MMER: Multimodal Multi-task Learning for Speech Emotion Recognition | |
| PATHOSnet v2 | 80.4 | 78 | Combining deep and unsupervised features for multilingual speech emotion recognition | - |
| Self-attention weight correction (A+T) | 76.8 | 76.85 | Speech Emotion Recognition Based on Self-Attention Weight Correction for Acoustic and Text Features | - |
| CHFusion | 76.5 | 76.8 | Multimodal Sentiment Analysis using Hierarchical Fusion with Context Modeling | |
| Audio + Text (Stage III) | - | 70.5 | HCAM -- Hierarchical Cross Attention Model for Multi-modal Emotion Recognition | - |
| MultiMAE-DER | - | - | MultiMAE-DER: Multimodal Masked Autoencoder for Dynamic Emotion Recognition | |
| COGMEN | - | - | COGMEN: COntextualized GNN based Multimodal Emotion recognitioN | |
| bc-LSTM | - | - | 0/1 Deep Neural Networks via Block Coordinate Descent | - |