From Static to Dynamic: Adapting Landmark-Aware Image Models for Facial Expression Recognition in Videos

Yin Chen, Jia Li, Shiguang Shan, Meng Wang, Richang Hong


Abstract

Dynamic facial expression recognition (DFER) in the wild is still hindered by data limitations, e.g., insufficient quantity and diversity of pose, occlusion, and illumination, as well as the inherent ambiguity of facial expressions. In contrast, static facial expression recognition (SFER) currently shows much higher performance and can benefit from more abundant high-quality training data. Moreover, the appearance features and dynamic dependencies of DFER remain largely unexplored. To tackle these challenges, we introduce a novel Static-to-Dynamic model (S2D) that leverages existing SFER knowledge and dynamic information implicitly encoded in extracted facial landmark-aware features, thereby significantly improving DFER performance. First, we build and train an image model for SFER, which incorporates only a standard Vision Transformer (ViT) and Multi-View Complementary Prompters (MCPs). Then, we obtain our video model for DFER (i.e., S2D) by inserting Temporal-Modeling Adapters (TMAs) into the image model. The MCPs enhance facial expression features with landmark-aware features inferred by an off-the-shelf facial landmark detector, while the TMAs capture and model the relationships of dynamic changes in facial expressions, effectively extending the pre-trained image model to videos. Notably, MCPs and TMAs add only a small fraction of trainable parameters (less than +10%) to the original image model. Moreover, we present a novel Emotion-Anchors (i.e., reference samples for each emotion category) based Self-Distillation Loss to reduce the detrimental influence of ambiguous emotion labels, further enhancing our S2D. Experiments conducted on popular SFER and DFER datasets show that our method achieves state-of-the-art performance.
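The core idea of extending a frozen image ViT to video with lightweight adapters can be sketched as below. This is an illustrative bottleneck-adapter module, not the paper's exact TMA: the dimensions, the depthwise temporal 1D convolution, and the zero-initialized residual projection are all assumptions chosen to reflect the "few extra trainable parameters" property described in the abstract.

```python
import torch
import torch.nn as nn

class TemporalAdapter(nn.Module):
    """Sketch of a Temporal-Modeling Adapter (TMA)-style bottleneck.

    Down-projects per-frame ViT tokens, mixes information along the time
    axis, and up-projects back with a residual connection. The up
    projection is zero-initialized, so the adapter starts as an identity
    and can be trained while the ViT backbone stays frozen.
    """

    def __init__(self, dim, bottleneck=64, num_frames=16):
        super().__init__()
        self.num_frames = num_frames
        self.down = nn.Linear(dim, bottleneck)
        # Depthwise 1D convolution over the frame axis models
        # frame-to-frame dynamics cheaply (illustrative choice).
        self.temporal = nn.Conv1d(bottleneck, bottleneck, kernel_size=3,
                                  padding=1, groups=bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        nn.init.zeros_(self.up.weight)  # residual branch starts at zero
        nn.init.zeros_(self.up.bias)

    def forward(self, x):
        # x: (batch * frames, tokens, dim), as produced by a frame-wise ViT
        bt, n, d = x.shape
        t = self.num_frames
        b = bt // t
        h = self.down(x)                              # (b*t, n, c)
        h = h.view(b, t, n, -1).permute(0, 2, 3, 1)   # (b, n, c, t)
        h = h.reshape(b * n, -1, t)
        h = self.temporal(h)                          # mix along time
        h = h.reshape(b, n, -1, t).permute(0, 3, 1, 2).reshape(bt, n, -1)
        return x + self.up(h)                         # residual update
```

In a setup like this, one such adapter would be inserted into each transformer block, and only the adapter parameters are trained, keeping the added parameter count small relative to the backbone.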
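The Emotion-Anchors based Self-Distillation Loss can be illustrated with a hedged sketch: soft targets are derived from similarity between sample features and one reference (anchor) feature per emotion class, and combined with the standard cross-entropy term. The cosine-similarity softening, the temperature `tau`, and the mixing weight `alpha` are assumptions for illustration; the paper's exact anchor construction and loss form may differ.

```python
import torch
import torch.nn.functional as F

def emotion_anchor_soft_targets(features, anchors, tau=0.1):
    """Soft labels from cosine similarity to per-class anchor features.

    features: (batch, feat_dim) sample embeddings
    anchors:  (num_classes, feat_dim), one reference embedding per emotion
    Returns a (batch, num_classes) distribution over emotion classes.
    """
    feats = F.normalize(features, dim=-1)
    anch = F.normalize(anchors, dim=-1)
    return F.softmax(feats @ anch.t() / tau, dim=-1)

def self_distillation_loss(logits, features, anchors, hard_labels, alpha=0.5):
    """Blend a KL self-distillation term (toward anchor-derived soft
    targets) with cross-entropy on the possibly-ambiguous hard labels."""
    soft = emotion_anchor_soft_targets(features, anchors)
    kl = F.kl_div(F.log_softmax(logits, dim=-1), soft, reduction="batchmean")
    ce = F.cross_entropy(logits, hard_labels)
    return alpha * kl + (1 - alpha) * ce
```

The intuition matching the abstract: when a label is ambiguous, the anchor-derived soft distribution spreads probability over visually similar emotions, so the model is not forced to fit a single possibly-wrong hard label.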

Code Repositories

msa-lmc/s2d (Official, PyTorch)
FER-LMC/S2D (Official, PyTorch)

Benchmarks

Facial Expression Recognition on AffectNet (S2D):
  Accuracy (7 emotion): 67.62
  Accuracy (8 emotion): 63.06
Facial Expression Recognition on RAF-DB (S2D):
  Overall Accuracy: 92.57
