Fadi Boutros, Naser Damer, Meiling Fang, Florian Kirchbuchner, Arjan Kuijper

Abstract
In this paper, we present a set of extremely efficient and high-throughput models for accurate face verification, MixFaceNets, which are inspired by Mixed Depthwise Convolutional Kernels. Extensive experimental evaluations on Labeled Faces in the Wild (LFW), AgeDB, MegaFace, and the IARPA Janus Benchmarks IJB-B and IJB-C datasets have shown the effectiveness of our MixFaceNets for applications requiring extremely low computational complexity. At the same level of computational complexity (< 500M FLOPs), our MixFaceNets outperform MobileFaceNets on all the evaluated datasets, achieving 99.60% accuracy on LFW, 97.05% accuracy on AgeDB-30, 93.60 TAR (at FAR=1e-6) on MegaFace, 90.94 TAR (at FAR=1e-4) on IJB-B, and 93.08 TAR (at FAR=1e-4) on IJB-C. With computational complexity between 500M and 1G FLOPs, our MixFaceNets achieve results comparable to the top-ranked models while using significantly fewer FLOPs and less computation overhead, which proves the practical value of our proposed MixFaceNets. All training code, pre-trained models, and training logs have been made available at https://github.com/fdbtrs/mixfacenets.
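The abstract names Mixed Depthwise Convolutional Kernels (MixConv) as the inspiration for MixFaceNets: channels are split into groups, and each group is processed by a depthwise convolution with a different kernel size before the results are concatenated. The following is a minimal NumPy sketch of that idea only, not the authors' implementation; the function names, random weights, and the choice of kernel sizes `[3, 5, 7]` are illustrative assumptions.

```python
import numpy as np

def depthwise_conv2d(x, kernels):
    """Depthwise conv, stride 1, zero 'same' padding.
    x: (C, H, W) input; kernels: (C, k, k), one filter per channel."""
    c, h, w = x.shape
    k = kernels.shape[-1]
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    out = np.zeros((c, h, w))
    for ci in range(c):
        for i in range(h):
            for j in range(w):
                out[ci, i, j] = np.sum(xp[ci, i:i + k, j:j + k] * kernels[ci])
    return out

def mixconv(x, kernel_sizes):
    """MixConv idea: split channels into len(kernel_sizes) groups, apply a
    different depthwise kernel size per group, then concatenate."""
    groups = np.array_split(x, len(kernel_sizes), axis=0)
    outs = []
    for g, k in zip(groups, kernel_sizes):
        weights = np.random.randn(g.shape[0], k, k)  # illustrative random weights
        outs.append(depthwise_conv2d(g, weights))
    return np.concatenate(outs, axis=0)

x = np.random.randn(16, 8, 8)            # 16 channels, 8x8 spatial map
y = mixconv(x, kernel_sizes=[3, 5, 7])   # three channel groups
print(y.shape)                           # (16, 8, 8): shape is preserved
```

Because each group's convolution uses "same" padding and stride 1, the output keeps the input's channel count and spatial size, so a MixConv layer can drop into an architecture wherever a single-kernel depthwise convolution would go.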
Benchmarks

| Benchmark | Model | MFLOPs | Result |
|---|---|---|---|
| lightweight-face-recognition-on-ijb-b | MixFaceNet-S | 451.7 | TAR @ FAR=0.01: 0.9017 |
| lightweight-face-recognition-on-ijb-c | MixFaceNet-S | 451.7 | TAR @ FAR=0.01: 0.9230 |
| lightweight-face-recognition-on-lfw | MixFaceNet-S | 451.7 | Accuracy: 0.996 (MParams: 3.07) |