HyperAI

Frame attention networks for facial expression recognition in videos

Debin Meng; Xiaojiang Peng; Kai Wang; Yu Qiao

Abstract

Video-based facial expression recognition aims to classify a given video into one of several basic emotions. How to integrate the facial features of individual frames is crucial for this task. In this paper, we propose Frame Attention Networks (FAN), which automatically highlight discriminative frames in an end-to-end framework. The network takes a video with a variable number of face images as input and produces a fixed-dimension representation. The whole network is composed of two modules. The feature embedding module is a deep Convolutional Neural Network (CNN) that embeds face images into feature vectors. The frame attention module learns multiple attention weights, which are used to adaptively aggregate the feature vectors into a single discriminative video representation. We conduct extensive experiments on the CK+ and AFEW8.0 datasets. Our proposed FAN shows superior performance compared to other CNN-based methods and achieves state-of-the-art performance on CK+.
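The core idea of the frame attention module — scoring each frame's embedding and taking a weighted average so that a variable-length video collapses to a fixed-dimension vector — can be sketched as below. This is a minimal single-stage attention pooling in NumPy; the paper's actual module is richer (it also includes a relation-attention stage), and the function name, the learned weight vector `w`, and the random stand-in for CNN frame features are all assumptions for illustration.

```python
import numpy as np

def frame_attention_pool(frame_feats, w):
    """Aggregate per-frame features into one fixed-size video vector.

    frame_feats: (T, D) array of per-frame CNN embeddings (T varies per video).
    w: (D,) learned attention parameter (hypothetical; stands in for the
       module's learned scoring layer).
    """
    scores = 1.0 / (1.0 + np.exp(-(frame_feats @ w)))  # sigmoid attention score per frame
    alphas = scores / scores.sum()                      # normalize weights over frames
    return alphas @ frame_feats                         # (D,) weighted average of frames

# Usage: two videos with different frame counts map to the same dimension.
rng = np.random.default_rng(0)
w = rng.normal(size=128)
video_a = frame_attention_pool(rng.normal(size=(12, 128)), w)  # 12-frame video
video_b = frame_attention_pool(rng.normal(size=(30, 128)), w)  # 30-frame video
assert video_a.shape == video_b.shape == (128,)
```

Because the weights are normalized over frames, more discriminative frames (higher scores) contribute more to the pooled representation, while the output size stays independent of the number of input frames.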

Code Repositories

Open-Debin/Emotion-FAN
Official
pytorch
Mentioned in GitHub

Benchmarks

| Benchmark | Methodology | Metrics |
| --- | --- | --- |
| facial-expression-recognition-on-acted-facial | resnet18 | Accuracy (on validation set): 51.181% |
| facial-expression-recognition-on-ck | FAN | Accuracy (7 emotion): 99.7 |
