
Abstract

In weight-sharing neural architecture search, one of the central problems is evaluating candidate models within a predefined search space. In practice, a one-shot supernet is usually trained to serve as the evaluator. A faithful ranking undoubtedly leads to more accurate search results. However, current methods are prone to misjudgment. This paper shows that their biased evaluation stems from inherent unfairness in supernet training. To address this, we propose two levels of constraints: Expectation Fairness and Strict Fairness. In particular, Strict Fairness ensures that all choice blocks receive equal optimization opportunities throughout the entire training process, so that their capabilities are neither overestimated nor underestimated. We show that this is crucial for improving confidence in model ranking. By combining a one-shot supernet trained under the proposed fairness constraints with a multi-objective evolutionary search algorithm, we obtain various state-of-the-art models; for example, FairNAS-A reaches 77.5% top-1 validation accuracy on ImageNet. The models and their evaluation code are publicly released at http://github.com/fairnas/FairNAS .
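The Strict Fairness constraint described above can be illustrated with a small sampling routine: in each training step, one random permutation of the choice blocks is drawn per layer, so that across the step's submodels every block in every layer is activated exactly once. This is a minimal sketch of that idea; the function name and interface are hypothetical, not taken from the FairNAS codebase.

```python
import random

def strict_fair_samples(num_layers, num_choices, rng=random):
    """Sample one training step's worth of submodels under strict fairness.

    For each layer, draw a random permutation of its choice-block indices.
    The i-th submodel then takes the i-th entry of every layer's permutation,
    so across the `num_choices` submodels of this step, each choice block in
    each layer is trained exactly once (illustrative sketch only).
    """
    perms = [rng.sample(range(num_choices), num_choices)
             for _ in range(num_layers)]
    return [[perms[layer][i] for layer in range(num_layers)]
            for i in range(num_choices)]
```

Accumulating gradients over all submodels of one step before a single weight update, as the paper's training scheme suggests, keeps every block's update count identical at every step.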
Code Repositories

- xiaomi-automl/FairNAS (pytorch, mentioned in GitHub)
- fairnas/FairNAS (official, pytorch, mentioned in GitHub)
Benchmarks

| Benchmark | Method | Metrics |
|---|---|---|
| image-classification-on-imagenet | FairNAS-A | GFLOPs: 0.776, Number of params: 4.6M, Top-1 Accuracy: 75.34% |
| image-classification-on-imagenet | FairNAS-C | GFLOPs: 0.642, Number of params: 4.4M, Top-1 Accuracy: 74.69% |
| image-classification-on-imagenet | FairNAS-B | GFLOPs: 0.690, Number of params: 4.5M, Top-1 Accuracy: 75.10% |
| neural-architecture-search-on-cifar-10 | FairNAS-A | FLOPs: 391, Parameters: 3, Search Time (GPU days): 8, Top-1 Error Rate: 1.8% |
| neural-architecture-search-on-imagenet | FairNAS-B | Accuracy: 75.1, MACs: 345M, Params: 4.5M, Top-1 Error Rate: 24.9 |
| neural-architecture-search-on-imagenet | FairNAS-A | Accuracy: 75.34, MACs: 388M, Params: 4.6M, Top-1 Error Rate: 24.7 |
| neural-architecture-search-on-imagenet | FairNAS-C | Accuracy: 74.69, MACs: 321M, Params: 4.4M, Top-1 Error Rate: 25.4 |
| neural-architecture-search-on-nas-bench-201 | FairNAS | Accuracy (Test): 42.19, Search time (s): 9845 |
| neural-architecture-search-on-nas-bench-201-1 | FairNAS | Accuracy (Test): 93.23, Accuracy (Val): 90.07, Search time (s): 9845 |
| neural-architecture-search-on-nas-bench-201-2 | FairNAS | Accuracy (Test): 71.00, Accuracy (Val): 70.94, Search time (s): 9845 |
| neural-architecture-search-on-nats-bench | FairNAS (Chu et al., 2021) | Test Accuracy: 42.19 |
| neural-architecture-search-on-nats-bench-1 | FairNAS (Chu et al., 2021) | Test Accuracy: 93.23 |
| neural-architecture-search-on-nats-bench-2 | FairNAS (Chu et al., 2021) | Test Accuracy: 71.00 |