Roberto Valle José Miguel Buenaposada Luis Baumela

Abstract
We present a deep-learning-based multi-task approach for head pose estimation in images. We contribute a network architecture and training strategy that harness the strong dependencies among face pose, alignment, and landmark visibility to produce a top-performing model for all three tasks. The architecture is an encoder-decoder CNN with residual blocks and lateral skip connections. We show that combining head pose estimation with landmark-based face alignment significantly improves the performance of the former task. Moreover, placing the pose task at the bottleneck layer, at the end of the encoder, and the tasks that depend on spatial information, such as visibility and alignment, at the final decoder layer further increases performance. In our experiments the proposed model outperforms the state of the art in the face pose and landmark visibility tasks. With a final landmark regression step it also produces face alignment results on par with the state of the art.
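The task layout described above can be sketched in code. The following is a hypothetical PyTorch sketch, not the authors' released model: residual blocks and lateral skip connections are omitted for brevity, and all layer sizes, names, and head designs are my own assumptions. It only illustrates where the heads attach: pose is regressed from the bottleneck, while landmark heatmaps and visibility come off the final decoder layer.

```python
# Hypothetical sketch of the multi-task layout from the abstract
# (not the authors' code; residual blocks and skip connections omitted).
import torch
import torch.nn as nn

class MultiTaskSketch(nn.Module):
    def __init__(self, n_landmarks=68):
        super().__init__()
        # Encoder: downsample the input image to a bottleneck.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Pose head at the bottleneck: global pooling -> (yaw, pitch, roll).
        self.pose_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 3))
        # Decoder: upsample back to input resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 32, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Spatially dependent heads on the final decoder layer.
        self.heatmap_head = nn.Conv2d(32, n_landmarks, 1)     # alignment
        self.visibility_head = nn.Conv2d(32, n_landmarks, 1)  # visibility

    def forward(self, x):
        z = self.encoder(x)              # bottleneck features
        pose = self.pose_head(z)         # (B, 3) Euler angles
        f = self.decoder(z)              # full-resolution features
        heatmaps = self.heatmap_head(f)  # (B, L, H, W) landmark heatmaps
        # One visibility score per landmark from its response map.
        visibility = torch.sigmoid(self.visibility_head(f).amax(dim=(2, 3)))
        return pose, heatmaps, visibility
```

The key design point the abstract argues for is visible here: the pose head sees only the global bottleneck encoding, whereas alignment and visibility, which need per-pixel localization, read from the decoder output.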
Code Repositories
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| face-alignment-on-aflw2000 | MNN+ORB (Reannotated) | Error rate: 2.58 |
| face-alignment-on-aflw2000-3d | MNN+OR (Reannotated) | Balanced NME (2D Sparse Alignment): 2.58% |
| face-alignment-on-cofw | MNN (Inter-pupil Norm) | NME (inter-pupil): 5.65% |
| face-alignment-on-cofw | MNN+OR (Inter-pupil Norm) | NME (inter-pupil): 5.04%; Recall at 80% precision (Landmark Visibility): 72.12 |
| head-pose-estimation-on-aflw | MNN | MAE: 3.22 |
| head-pose-estimation-on-aflw2000 | MNN | MAE: 3.83 |
| head-pose-estimation-on-biwi | MNN | MAE (trained with other data): 3.66 |
| pose-estimation-on-300w-full | MNN | MAE mean (°): 1.56 |
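The two error measures used throughout the table are standard: MAE averages the absolute Euler-angle error in degrees, and NME averages the landmark localization error normalized by a reference distance (here, the inter-pupil distance, reported as a percentage). A minimal plain-Python illustration, with function names and signatures of my own choosing:

```python
# Illustrative implementations of the metrics in the table above.
# Names and signatures are assumptions, not the benchmarks' official code.
import math

def pose_mae(pred_angles, true_angles):
    """Mean absolute error in degrees over (yaw, pitch, roll) triples."""
    errs = [abs(p - t)
            for pr, tr in zip(pred_angles, true_angles)
            for p, t in zip(pr, tr)]
    return sum(errs) / len(errs)

def nme_inter_pupil(pred_pts, true_pts, left_pupil, right_pupil):
    """Mean landmark error divided by the inter-pupil distance, as a %."""
    d = math.dist(left_pupil, right_pupil)  # normalizing distance
    errs = [math.dist(p, t) for p, t in zip(pred_pts, true_pts)]
    return 100.0 * (sum(errs) / len(errs)) / d
```

For example, a single prediction of (1°, 2°, 3°) against ground truth (0°, 0°, 0°) gives an MAE of 2.0°.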