Changkyu Choi, Jae-Joon Han, Jinwoo Shin, Ji-won Baek, Seong-Jin Park, Seungju Han, Insoo Kim

Abstract
Softmax-based learning methods have shown state-of-the-art performance on large-scale face recognition tasks. In this paper, we identify an important issue with softmax-based approaches: sample features around their corresponding class weight are penalized similarly during training even though their directions differ from one another. This directional discrepancy, i.e., process discrepancy, leads to performance degradation at the evaluation phase. To mitigate the issue, we propose a novel training scheme, called minimum discrepancy learning, that enforces the directions of intra-class sample features to be aligned toward an optimal direction by using a single learnable basis. Furthermore, the single learnable basis facilitates disentangling the so-called class-invariant vectors from sample features, making training effective even on class-imbalanced datasets.
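The abstract only describes the idea at a high level, so the following PyTorch snippet is a minimal sketch of one plausible reading: each sample's displacement from its (normalized) class weight is pulled toward a single learnable basis so that intra-class feature directions agree. The class name `MinimumDiscrepancyLoss`, the normalization choices, and how this term would be weighted against the softmax loss are assumptions, not the paper's exact formulation.

```python
# Sketch only: illustrates aligning per-sample displacement vectors to a single
# learnable basis, as suggested by the abstract. Details are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MinimumDiscrepancyLoss(nn.Module):
    """Penalizes the spread of per-sample displacement vectors around one
    shared learnable basis, so intra-class features deviate from their
    class weight in a common direction."""

    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        # Class weights as in a standard softmax classifier head.
        self.class_weights = nn.Parameter(torch.randn(num_classes, feat_dim))
        # Single learnable basis shared across all samples (assumption:
        # it lives in the same normalized feature space).
        self.basis = nn.Parameter(torch.zeros(feat_dim))

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Work on the unit hypersphere, as is common for softmax-based
        # face recognition heads.
        feats = F.normalize(features, dim=1)
        weights = F.normalize(self.class_weights, dim=1)
        # Displacement of each sample feature from its own class weight.
        displacement = feats - weights[labels]
        # Discrepancy term: pull every displacement toward the shared basis
        # so their directions agree across samples.
        return (displacement - self.basis).pow(2).sum(dim=1).mean()


if __name__ == "__main__":
    loss_fn = MinimumDiscrepancyLoss(feat_dim=512, num_classes=10)
    feats = torch.randn(8, 512)
    labels = torch.randint(0, 10, (8,))
    print(loss_fn(feats, labels))  # scalar discrepancy penalty
```

In practice such a term would be added to the usual softmax/margin loss with a small weight; the weighting scheme is not specified in the text above.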
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| Face Recognition on CFP-FP | DiscFace | Accuracy: 98.54% |
| Face Recognition on LFW | DiscFace | Accuracy: 99.83% |
| Face Verification on AgeDB-30 | DiscFace | Accuracy: 98.35% |
| Face Verification on CALFW | DiscFace | Accuracy: 96.15% |
| Face Verification on CPLFW | DiscFace | Accuracy: 93.37% |
| Face Verification on MegaFace | DiscFace | Accuracy: 97.44% |
| Face Verification on QMUL-SurvFace | DiscFace | TAR @ FAR=0.1: 35.9% |