Self-attention fusion for audiovisual emotion recognition with incomplete data
Kateryna Chumachenko, Alexandros Iosifidis, Moncef Gabbouj

Abstract
In this paper, we consider the problem of multimodal data analysis with a use case of audiovisual emotion recognition. We propose an architecture capable of learning from raw data and describe three of its variants with distinct modality fusion mechanisms. While most previous works consider the ideal scenario in which both modalities are present at all times during inference, we evaluate the robustness of the model in unconstrained settings where one modality is absent or noisy, and propose a method to mitigate these limitations in the form of modality dropout. Most importantly, we find that this approach not only drastically improves performance when one modality is absent or noisy, but also improves performance in the standard ideal setting, outperforming competing methods.
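The abstract's modality dropout idea can be illustrated with a short sketch. The snippet below is a minimal, hypothetical PyTorch-style version, not the paper's implementation: during training, one modality's features are occasionally zeroed out so that the fusion layers learn to cope with a missing or corrupted modality at inference time. The function name and the drop probability `p_drop` are illustrative assumptions.

```python
import torch

def modality_dropout(audio_feats, visual_feats, p_drop=0.1, training=True):
    """Illustrative modality dropout (not the paper's exact implementation).

    With probability p_drop the audio features are dropped, and with the
    same probability the visual features are dropped (never both at once),
    simulating the absence of one modality during training.
    """
    if not training:
        return audio_feats, visual_feats
    r = torch.rand(1).item()
    if r < p_drop:
        # Simulate missing audio by zeroing its features.
        audio_feats = torch.zeros_like(audio_feats)
    elif r < 2 * p_drop:
        # Simulate missing video by zeroing its features.
        visual_feats = torch.zeros_like(visual_feats)
    return audio_feats, visual_feats
```

In use, this would be applied to the two branch outputs just before the fusion block, e.g. `audio_feats, visual_feats = modality_dropout(audio_feats, visual_feats, training=model.training)`. Zeroing is one simple choice for representing an absent modality; the appropriate placeholder depends on how the fusion mechanism handles its inputs.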
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| emotion-recognition-on-ravdess | Intermediate-Attention-Fusion | Accuracy: 81.58% |
| facial-emotion-recognition-on-ravdess | Intermediate-Transformer-Fusion, visual branch only | Accuracy: 74.92% |