Learning Language-guided Adaptive Hyper-modality Representation for Multimodal Sentiment Analysis
Haoyu Zhang, Yu Wang, Guanghao Yin, Kejun Liu, Yuanyuan Liu, Tianshu Yu

Abstract
Though Multimodal Sentiment Analysis (MSA) proves effective by exploiting rich information from multiple sources (e.g., language, video, and audio), sentiment-irrelevant and conflicting information across modalities may hinder further performance gains. To alleviate this, we present the Adaptive Language-guided Multimodal Transformer (ALMT), which incorporates an Adaptive Hyper-modality Learning (AHL) module to learn an irrelevance/conflict-suppressing representation from visual and audio features under the guidance of language features at different scales. With the obtained hyper-modality representation, the model derives a complementary and joint representation through multimodal fusion for effective MSA. In practice, ALMT achieves state-of-the-art performance on several popular datasets (e.g., MOSI, MOSEI, and CH-SIMS), and extensive ablation studies demonstrate the validity and necessity of our irrelevance/conflict suppression mechanism.
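To make the idea in the abstract concrete, below is a minimal PyTorch sketch of language-guided hyper-modality learning followed by multimodal fusion. It assumes pre-extracted, equally sized language, visual, and audio features; the class names (`AHLLayer`, `ALMTSketch`), dimensions, and layer counts are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch: language features guide cross-attention over visual and
# audio features to build a hyper-modality representation that suppresses
# sentiment-irrelevant/conflicting content, which is then fused with language
# features for sentiment regression. Not the authors' code.
import torch
import torch.nn as nn

class AHLLayer(nn.Module):
    def __init__(self, dim: int = 128, heads: int = 8):
        super().__init__()
        self.attn_v = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_a = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, hyper, lang, visual, audio):
        # Language-guided queries attend to visual and audio features; the
        # attended summaries update the hyper-modality representation.
        v_ctx, _ = self.attn_v(lang, visual, visual)
        a_ctx, _ = self.attn_a(lang, audio, audio)
        return self.norm(hyper + v_ctx + a_ctx)

class ALMTSketch(nn.Module):
    def __init__(self, dim: int = 128, depth: int = 3):
        super().__init__()
        self.layers = nn.ModuleList(AHLLayer(dim) for _ in range(depth))
        fusion_layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.fusion = nn.TransformerEncoder(fusion_layer, num_layers=2)
        self.head = nn.Linear(dim, 1)  # regression head (e.g., sentiment intensity)

    def forward(self, lang, visual, audio):
        hyper = torch.zeros_like(lang)  # learnable tokens in a full implementation
        for layer in self.layers:
            hyper = layer(hyper, lang, visual, audio)
        fused = self.fusion(torch.cat([lang, hyper], dim=1))
        return self.head(fused.mean(dim=1))

# Toy usage with random features of shape (batch, seq_len, dim).
model = ALMTSketch()
lang = torch.randn(2, 50, 128)
visual = torch.randn(2, 50, 128)
audio = torch.randn(2, 50, 128)
print(model(lang, visual, audio).shape)  # torch.Size([2, 1])
```

In this sketch, language acts as the query in both cross-attention branches, so only the visual and audio content that aligns with the linguistic context contributes to the hyper-modality representation; the paper's actual AHL additionally uses language features at different scales to guide successive layers.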
Benchmarks
| Benchmark | Methodology | Acc-2 | Acc-3 | Acc-5 | Acc-7 | F1 | Corr | MAE |
|---|---|---|---|---|---|---|---|---|
| multimodal-sentiment-analysis-on-ch-sims | ALMT | 81.19 | 68.93 | 45.73 | – | 81.57 | 0.619 | 0.404 |
| multimodal-sentiment-analysis-on-cmu-mosei-1 | ALMT | – | – | 55.96 | 54.28 | – | 0.779 | 0.526 |
| multimodal-sentiment-analysis-on-cmu-mosi | ALMT | – | – | 56.41 | 49.42 | – | 0.805 | 0.683 |