
Abstract
Multimodal emotion recognition in conversation (MERC) has received considerable research attention in recent years. Existing MERC methods face several challenges: (1) they fail to fully harness direct inter-modal cues, possibly leading to less thorough cross-modal modeling; (2) they extract information from the same and different modalities simultaneously at each network layer, potentially triggering conflicts during multi-source data fusion; (3) they lack the flexibility needed to detect dynamic sentimental changes, possibly resulting in inaccurate classification of utterances that exhibit abrupt emotion shifts. To address these issues, a novel approach named GraphSmile is proposed for tracking intricate emotional cues in multimodal dialogues. GraphSmile comprises two key components, i.e., the GSF module and the SDP module. GSF ingeniously leverages graph structures to alternately assimilate inter-modal and intra-modal emotional dependencies layer by layer, adequately capturing cross-modal cues while effectively avoiding fusion conflicts. SDP is an auxiliary task that explicitly delineates the sentiment dynamics between utterances, enhancing the model's ability to distinguish sentiment shifts. Moreover, GraphSmile can be readily applied to multimodal sentiment analysis in conversation (MSAC), yielding a unified multimodal affective model capable of performing both the MERC and MSAC tasks. Empirical results on multiple benchmarks demonstrate that GraphSmile can handle complex emotional and sentimental patterns, significantly outperforming baseline models.
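The abstract describes the architecture only at a high level, so the following PyTorch sketch is purely illustrative: it shows what alternating inter-modal/intra-modal graph propagation plus a sentiment-shift auxiliary head could look like. All names (`GraphSmileSketch`, `GraphLayer`), the edge construction, and the dimensions are assumptions, not taken from the official repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphLayer(nn.Module):
    """One round of degree-normalized message passing (GCN-style); hypothetical."""
    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(dim, dim)

    def forward(self, x, adj):
        # x: (N, dim) node features; adj: (N, N) 0/1 adjacency with self-loops
        deg = adj.sum(-1, keepdim=True).clamp(min=1.0)
        return F.relu(self.lin(adj @ x / deg))


class GraphSmileSketch(nn.Module):
    """Illustrative sketch: alternate inter- and intra-modal graph layers,
    then predict per-utterance emotions (MERC) and sentiment shifts (SDP)."""
    def __init__(self, dim=128, n_layers=2, n_emotions=7):
        super().__init__()
        self.inter_layers = nn.ModuleList(GraphLayer(dim) for _ in range(n_layers))
        self.intra_layers = nn.ModuleList(GraphLayer(dim) for _ in range(n_layers))
        self.emo_head = nn.Linear(3 * dim, n_emotions)
        self.shift_head = nn.Linear(2 * 3 * dim, 2)  # shift vs. no shift

    def forward(self, text, audio, visual):
        # text/audio/visual: (T, dim) utterance features; one node per modality
        T = text.size(0)
        x = torch.cat([text, audio, visual], dim=0)  # (3T, dim)

        # Intra-modal edges: temporal neighbors within each modality (assumed).
        intra = torch.zeros(3 * T, 3 * T)
        for m in range(3):
            for t in range(T):
                i = m * T + t
                intra[i, i] = 1
                if t > 0:
                    intra[i, i - 1] = intra[i - 1, i] = 1

        # Inter-modal edges: the same utterance across modalities (assumed).
        inter = torch.zeros(3 * T, 3 * T)
        for t in range(T):
            idx = [t, T + t, 2 * T + t]
            for a in idx:
                for b in idx:
                    inter[a, b] = 1

        # Alternate inter- and intra-modal propagation layer by layer,
        # mirroring the abstract's description of GSF.
        for inter_l, intra_l in zip(self.inter_layers, self.intra_layers):
            x = inter_l(x, inter)
            x = intra_l(x, intra)

        utt = torch.cat([x[:T], x[T:2 * T], x[2 * T:]], dim=-1)  # (T, 3*dim)
        emotion_logits = self.emo_head(utt)                      # MERC output
        pair = torch.cat([utt[:-1], utt[1:]], dim=-1)            # consecutive pairs
        shift_logits = self.shift_head(pair)                     # SDP auxiliary output
        return emotion_logits, shift_logits
```

In training, the auxiliary nature of SDP would typically be reflected by a weighted joint objective, e.g. `loss = ce(emotion_logits, y_emo) + lam * ce(shift_logits, y_shift)`; the weighting and the exact shift-label definition here are assumptions.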
Code Repository
lijfrank-open/GraphSmile
Official
PyTorch
Mentioned in GitHub
Benchmarks
| Benchmark | Method | Metrics |
|---|---|---|
| emotion-recognition-in-conversation-on | GraphSmile | Accuracy: 72.77 Weighted F1: 72.81 |
| emotion-recognition-in-conversation-on-7 | GraphSmile | Accuracy: 86.53 Weighted F1: 86.52 |
| emotion-recognition-in-conversation-on-cmu-2 | GraphSmile | Accuracy: 46.82 Weighted F1: 44.93 |
| emotion-recognition-in-conversation-on-cmu-3 | GraphSmile | Accuracy: 67.73 Weighted F1: 66.73 |
| emotion-recognition-in-conversation-on-meld | GraphSmile | Accuracy: 67.70 Weighted F1: 66.71 |
| emotion-recognition-in-conversation-on-meld-1 | GraphSmile | Accuracy: 74.44 Weighted F1: 74.31 |
| multimodal-emotion-recognition-on-cmu-mosei-1 | GraphSmile | Accuracy: 46.82 Weighted F1: 44.93 |
| multimodal-emotion-recognition-on-cmu-mosei-2 | GraphSmile | Accuracy: 67.73 Weighted F1: 66.73 |
| multimodal-emotion-recognition-on-iemocap | GraphSmile | Accuracy: 72.77 Weighted F1: 72.81 |
| multimodal-emotion-recognition-on-iemocap-4 | GraphSmile | Accuracy: 86.53 Weighted F1: 86.52 |
| multimodal-emotion-recognition-on-meld | GraphSmile | Accuracy: 67.70 Weighted F1: 66.71 |
| multimodal-emotion-recognition-on-meld-1 | GraphSmile | Accuracy: 74.44 Weighted F1: 74.31 |