Neural Architecture Search on CIFAR-10
Evaluation Metrics
Parameters
Search Time (GPU days)
Top-1 Error Rate
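The Top-1 Error Rate reported here is the fraction of test images whose single highest-scoring predicted class differs from the ground-truth label. A minimal NumPy sketch (the logits and labels below are illustrative, not from any benchmark):

```python
import numpy as np

def top1_error_rate(logits, labels):
    """Fraction of samples whose highest-scoring class is not the true label."""
    predictions = np.argmax(logits, axis=1)
    return float(np.mean(predictions != labels))

# Three samples, two classes; the second prediction (class 0) misses label 1.
logits = np.array([[0.1, 2.0], [3.0, 0.5], [0.2, 1.7]])
labels = np.array([1, 1, 1])
print(top1_error_rate(logits, labels))  # → 0.3333333333333333
```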
Evaluation Results
Performance of each model on this benchmark:
| Model | Parameters | Search Time (GPU days) | Top-1 Error Rate | Paper Title | Repository |
|---|---|---|---|---|---|
| GDAS | - | 0.21 | 3.4% | Searching for A Robust Neural Architecture in Four GPU Hours | |
| Bonsai-Net | 2.9M | 0.10 | 3.35% | Bonsai-Net: One-Shot Neural Architecture Search via Differentiable Pruners | |
| Net2 (2) | - | - | 3.3% | Efficacy of Neural Prediction-Based Zero-Shot NAS | |
| μDARTS | - | 0.1 | 3.277% | μDARTS: Model Uncertainty-Aware Differentiable Architecture Search | - |
| NN-MASS-CIFAR-C | 3.82M | 0 | 3.18% | How does topology of neural architectures impact gradient propagation and model performance? | - |
| NN-MASS-CIFAR-A | 5.02M | 0 | 3.0% | How does topology of neural architectures impact gradient propagation and model performance? | - |
| DARTS (first order) | 3.3M | 1.5 | 3.0% | DARTS: Differentiable Architecture Search | |
| NASGEP | - | 1 | 2.82% | Optimizing Neural Architecture Search using Limited GPU Time in a Dynamic Search Space: A Gene Expression Programming Approach | |
| AlphaX-1 (cutout NASNet) | - | 224 | 2.82% | AlphaX: eXploring Neural Architectures with Deep Neural Networks and Monte Carlo Tree Search | |
| DARTS (second order) | 3.3M | 4 | 2.76% | DARTS: Differentiable Architecture Search | |
| SETN (T=1K) + CutOut | - | 1.8 | 2.69% | One-Shot Neural Architecture Search via Self-Evaluated Template Network | |
| DARTS-PRIME | 3.7M | 0.5 | 2.62% | DARTS-PRIME: Regularization and Scheduling Improve Constrained Optimization in Differentiable NAS | - |
| NAT-M1 | 4.3M | 1.0 | 2.6% | Neural Architecture Transfer | |
| PC-DARTS | 3.6M | 0.1 | 2.57% | PC-DARTS: Partial Channel Connections for Memory-Efficient Architecture Search | |
| arch2vec | 3.6M | 10.5 | 2.56% | Does Unsupervised Architecture Representation Learning Help Neural Architecture Search? | |
| FairDARTS-a | 2.8M | 0.25 | 2.54% | Fair DARTS: Eliminating Unfair Advantages in Differentiable Architecture Search | |
| MSR-DARTS | 4.0M | 0.3 | 2.54% | MSR-DARTS: Minimum Stable Rank of Differentiable Architecture Search | - |
| Soft Parameter Sharing | - | 0.7 | 2.53% | Learning Implicitly Recurrent CNNs Through Parameter Sharing | |
| β-DARTS | - | - | 2.53% | β-DARTS: Beta-Decay Regularization for Differentiable Architecture Search | |
| TNASP | 3.7M | 0.3 | 2.52% | TNASP: A Transformer-based NAS Predictor with a Self-evolution Framework | - |
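Many of the leading entries (DARTS, PC-DARTS, FairDARTS, μDARTS, β-DARTS, MSR-DARTS, DARTS-PRIME) belong to the differentiable-NAS family introduced by the DARTS paper. Their shared idea is to relax the discrete choice of operation on each edge of a cell into a softmax-weighted "mixed operation", so the architecture parameters can be optimized by gradient descent alongside the network weights. A minimal NumPy sketch, with a hypothetical three-operation candidate set standing in for the real search space:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(alpha):
    e = np.exp(alpha - alpha.max())
    return e / e.sum()

# Hypothetical candidate operations on one edge of a cell.
ops = [
    lambda x: x,                 # identity / skip connection
    lambda x: np.zeros_like(x),  # "zero" op (no connection)
    lambda x: np.maximum(x, 0),  # ReLU, standing in for a conv op
]

def mixed_op(x, alpha):
    """DARTS-style mixed operation: softmax-weighted sum of candidate ops."""
    weights = softmax(alpha)
    return sum(w * op(x) for w, op in zip(weights, ops))

alpha = rng.normal(size=len(ops))  # architecture parameters, learned by gradient descent
x = rng.normal(size=4)
y = mixed_op(x, alpha)

# Discretization after search: keep the op with the largest architecture weight.
chosen = int(np.argmax(softmax(alpha)))
```

Since `mixed_op` is differentiable in `alpha`, search cost drops to roughly one training run, which is why the DARTS-family rows above report search times of a fraction of a GPU day rather than the hundreds of GPU days of tree-search methods like AlphaX.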
First 20 of 41 entries shown.