HyperAI
Self-attention fusion for audiovisual emotion recognition with incomplete data

Kateryna Chumachenko Alexandros Iosifidis Moncef Gabbouj


Abstract

In this paper, we consider the problem of multimodal data analysis with a use case of audiovisual emotion recognition. We propose an architecture capable of learning from raw data and describe three variants of it with distinct modality fusion mechanisms. While most previous works consider the ideal scenario in which both modalities are present at all times during inference, we evaluate the robustness of the model in unconstrained settings where one modality is absent or noisy, and propose a method to mitigate these limitations in the form of modality dropout. Most importantly, we find that this approach not only improves performance drastically when one modality is absent or noisy, but also improves performance in the standard ideal setting, outperforming the competing methods.
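The modality dropout idea described in the abstract can be sketched as follows. This is a minimal plain-Python illustration, not the authors' implementation; the function name, signature, and default dropout probability are assumptions:

```python
import random

def modality_dropout(audio, video, p=0.25, training=True):
    """Randomly suppress one modality's features during training.

    With probability p, one of the two modalities (chosen uniformly at
    random) is replaced by a zero vector, forcing the downstream fusion
    layers to learn representations that remain useful when an input is
    missing or corrupted. At inference time both modalities pass through
    unchanged.
    """
    if training and random.random() < p:
        if random.random() < 0.5:
            audio = [0.0] * len(audio)
        else:
            video = [0.0] * len(video)
    return audio, video
```

In practice this would operate on batched feature tensors inside the training loop, but the control flow (drop one modality with some probability, only during training) is the core of the technique.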

Code Repositories

katerynaCh/multimodal-emotion-recognition (official, PyTorch)
shravan-18/AVTCA (PyTorch)

Benchmarks

Benchmark | Methodology | Metrics
emotion-recognition-on-ravdess | Intermediate-Attention-Fusion | Accuracy: 81.58%
facial-emotion-recognition-on-ravdess | Intermediate-Transformer-Fusion (visual branch only) | Accuracy: 74.92%

