
Contextual Inter-modal Attention for Multi-modal Sentiment Analysis

Deepanway Ghosal, Md. Shad Akhtar, Dushyant Chauhan, Soujanya Poria, Asif Ekbal, Pushpak Bhattacharyya

Abstract

Multi-modal sentiment analysis poses various challenges, one being the effective combination of different input modalities, namely text, visual and acoustic. In this paper, we propose a recurrent neural network based multi-modal attention framework that leverages contextual information for utterance-level sentiment prediction. The proposed approach applies attention to multi-modal, multi-utterance representations and tries to learn the contributing features among them. We evaluate our proposed approach on two multi-modal sentiment analysis benchmark datasets, viz. the CMU Multi-modal Opinion-level Sentiment Intensity (CMU-MOSI) corpus and the recently released CMU Multi-modal Opinion Sentiment and Emotion Intensity (CMU-MOSEI) corpus. Evaluation results show the effectiveness of our proposed approach, with accuracies of 82.31% and 79.80% on the MOSI and MOSEI datasets, respectively. These are approximately 2- and 1-point improvements over the state-of-the-art models for the respective datasets.
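To make the described mechanism concrete, below is a minimal PyTorch sketch of the bi-modal, multi-utterance attention idea: recurrent encoders produce contextual per-utterance representations for each modality, and attention is applied pairwise across modalities. The class names, hidden size, and feature dimensions are hypothetical, and this is one reading of the abstract rather than the authors' released implementation.

```python
# Minimal sketch of contextual inter-modal attention, assuming PyTorch.
# Shapes, names, and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BimodalAttention(nn.Module):
    """Pairwise attention between two modalities' utterance sequences."""
    def forward(self, x, y):
        # x, y: (batch, utterances, dim) contextual representations
        scores_xy = torch.matmul(x, y.transpose(1, 2))          # (B, U, U)
        scores_yx = torch.matmul(y, x.transpose(1, 2))          # (B, U, U)
        attn_x = torch.matmul(F.softmax(scores_xy, dim=-1), y)  # y-informed view of x
        attn_y = torch.matmul(F.softmax(scores_yx, dim=-1), x)  # x-informed view of y
        # element-wise gating with the original representations
        return attn_x * x, attn_y * y

class MultiModalSentiment(nn.Module):
    def __init__(self, dims, hidden=100, n_classes=2):
        super().__init__()
        # one bidirectional GRU per modality captures utterance context
        self.grus = nn.ModuleList(
            nn.GRU(d, hidden, batch_first=True, bidirectional=True) for d in dims
        )
        self.attend = BimodalAttention()
        # 3 modality reps + 3 pairs * 2 attended reps = 9 blocks of size 2*hidden
        self.classify = nn.Linear(9 * 2 * hidden, n_classes)

    def forward(self, text, visual, acoustic):
        reps = [gru(m)[0] for gru, m in zip(self.grus, (text, visual, acoustic))]
        feats = list(reps)
        for i in range(3):
            for j in range(i + 1, 3):
                a, b = self.attend(reps[i], reps[j])
                feats += [a, b]
        # per-utterance sentiment logits from the fused representation
        return self.classify(torch.cat(feats, dim=-1))

# Usage with made-up feature sizes (batch of 8 videos, 20 utterances each):
model = MultiModalSentiment(dims=(300, 35, 74))
logits = model(torch.randn(8, 20, 300),
               torch.randn(8, 20, 35),
               torch.randn(8, 20, 74))   # -> (8, 20, n_classes)
```

The key design point the abstract emphasizes is that attention operates over whole utterance sequences, so each utterance's prediction can draw on contextual evidence from neighbouring utterances in the other modalities, not just its own time step.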

Benchmarks

Benchmark | Methodology | Metrics
multimodal-sentiment-analysis-on-mosi | MMMU-BA | Accuracy: 82.31%
