FairNAS: Rethinking Evaluation Fairness of Weight Sharing Neural Architecture Search

Xiangxiang Chu; Bo Zhang; Ruijun Xu

Abstract

One of the most critical problems in weight-sharing neural architecture search is the evaluation of candidate models within a predefined search space. In practice, a one-shot supernet is trained to serve as an evaluator. A faithful ranking certainly leads to more accurate search results. However, current methods are prone to misjudgments. In this paper, we prove that their biased evaluation is due to inherent unfairness in supernet training. In view of this, we propose two levels of constraints: expectation fairness and strict fairness. In particular, strict fairness ensures equal optimization opportunities for all choice blocks throughout training, so that the capacity of each block is neither overestimated nor underestimated. We demonstrate that this is crucial for improving the confidence of the models' ranking. Combining the one-shot supernet trained under the proposed fairness constraints with a multi-objective evolutionary search algorithm, we obtain various state-of-the-art models; e.g., FairNAS-A attains 77.5% top-1 validation accuracy on ImageNet. The models and their evaluation code are publicly available at http://github.com/fairnas/FairNAS .
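The strict-fairness idea from the abstract can be made concrete with a short training sketch. The code below is a minimal, hypothetical PyTorch illustration, not code from the official repository: `ToySupernet`, `strict_fairness_step`, the layer widths, and the data are placeholders. In every training step, each layer draws an independent permutation of its candidate blocks, the resulting single-path models are run one after another, and their gradients are accumulated into a single parameter update, so every choice block is activated exactly once per step.

```python
# Minimal sketch of "strict fairness" supernet training (illustrative only;
# the supernet, shapes, and data below are assumptions, not the authors' code).
import torch
import torch.nn as nn


class ToySupernet(nn.Module):
    """A toy supernet: each layer holds `num_choices` interchangeable blocks."""

    def __init__(self, num_layers=4, num_choices=3, width=32, num_classes=10):
        super().__init__()
        self.layers = nn.ModuleList(
            [
                nn.ModuleList(
                    [nn.Sequential(nn.Linear(width, width), nn.ReLU())
                     for _ in range(num_choices)]
                )
                for _ in range(num_layers)
            ]
        )
        self.head = nn.Linear(width, num_classes)

    def forward(self, x, choices):
        # `choices[i]` picks which candidate block layer i uses for this path.
        for layer, c in zip(self.layers, choices):
            x = layer[c](x)
        return self.head(x)


def strict_fairness_step(net, optimizer, x, y, num_choices):
    """One step in which every choice block of every layer is trained exactly once.

    Each layer draws its own permutation of the choices; column k of these
    permutations defines the k-th single-path model. Gradients of all paths
    are accumulated before a single update (the strict-fairness constraint).
    """
    criterion = nn.CrossEntropyLoss()
    perms = [torch.randperm(num_choices) for _ in net.layers]
    optimizer.zero_grad()
    for k in range(num_choices):
        path = [int(p[k]) for p in perms]
        loss = criterion(net(x, path), y)
        loss.backward()  # accumulate gradients across the sampled paths
    optimizer.step()


if __name__ == "__main__":
    net = ToySupernet()
    opt = torch.optim.SGD(net.parameters(), lr=0.05, momentum=0.9)
    x, y = torch.randn(16, 32), torch.randint(0, 10, (16,))
    strict_fairness_step(net, opt, x, y, num_choices=3)
```

In the paper's pipeline, a supernet trained under this constraint serves only as the evaluator; a separate multi-objective evolutionary search then selects the final architectures (FairNAS-A/B/C) reported in the benchmarks below.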

Code Repositories

xiaomi-automl/FairNAS (PyTorch)
fairnas/FairNAS (official, PyTorch)

Benchmarks

Benchmark: image-classification-on-imagenet
  FairNAS-A: GFLOPs 0.776, Params 4.6M, Top-1 Accuracy 75.34%
  FairNAS-B: GFLOPs 0.690, Params 4.5M, Top-1 Accuracy 75.10%
  FairNAS-C: GFLOPs 0.642, Params 4.4M, Top-1 Accuracy 74.69%

Benchmark: neural-architecture-search-on-cifar-10
  FairNAS-A: FLOPs 391, Parameters 3, Search Time 8 GPU days, Top-1 Error Rate 1.8%

Benchmark: neural-architecture-search-on-imagenet
  FairNAS-A: Accuracy 75.34%, MACs 388M, Params 4.6M, Top-1 Error Rate 24.7%
  FairNAS-B: Accuracy 75.10%, MACs 345M, Params 4.5M, Top-1 Error Rate 24.9%
  FairNAS-C: Accuracy 74.69%, MACs 321M, Params 4.4M, Top-1 Error Rate 25.4%

Benchmark: neural-architecture-search-on-nas-bench-201
  FairNAS: Accuracy (Test) 42.19, Search Time 9845 s

Benchmark: neural-architecture-search-on-nas-bench-201-1
  FairNAS: Accuracy (Test) 93.23, Accuracy (Val) 90.07, Search Time 9845 s

Benchmark: neural-architecture-search-on-nas-bench-201-2
  FairNAS: Accuracy (Test) 71.00, Accuracy (Val) 70.94, Search Time 9845 s

Benchmark: neural-architecture-search-on-nats-bench
  FairNAS (Chu et al., 2021): Test Accuracy 42.19

Benchmark: neural-architecture-search-on-nats-bench-1
  FairNAS (Chu et al., 2021): Test Accuracy 93.23

Benchmark: neural-architecture-search-on-nats-bench-2
  FairNAS (Chu et al., 2021): Test Accuracy 71.00
