Domain Generalization on ImageNet-R

Evaluation Metric

Top-1 Error Rate
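
The Top-1 Error Rate is the percentage of test images whose highest-scoring prediction does not match the ground-truth class (lower is better). Below is a minimal sketch of how it is typically computed; the framework choice (PyTorch) and the function name are illustrative, not part of the benchmark definition.

```python
import torch

def top1_error_rate(logits: torch.Tensor, labels: torch.Tensor) -> float:
    """Percentage of samples whose argmax prediction differs from the true label."""
    preds = logits.argmax(dim=1)              # predicted class index per sample
    wrong = (preds != labels).sum().item()    # number of misclassified samples
    return 100.0 * wrong / labels.numel()

# Example: logits of shape (N, num_classes), labels of shape (N,)
# err = top1_error_rate(model(images), labels)
```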

Evaluation Results

Performance of each model on this benchmark is listed below; a minimal evaluation sketch follows the table.

| Model | Top-1 Error Rate (%) | Paper Title |
| --- | --- | --- |
| Mixer-B/8-SAM | 76.5 | When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations |
| ViT-B/16-SAM | 73.6 | When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations |
| ResNet-152x2-SAM | 71.9 | When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations |
| ResNet-50 | 63.9 | Deep Residual Learning for Image Recognition |
| AugMix (ResNet-50) | 58.9 | AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty |
| Stylized ImageNet (ResNet-50) | 58.5 | ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness |
| DeepAugment (ResNet-50) | 57.8 | The Many Faces of Robustness: A Critical Analysis of Out-of-Distribution Generalization |
| PRIME (ResNet-50) | 57.1 | PRIME: A few primitives can boost robustness to common corruptions |
| RVT-Ti* | 56.1 | Towards Robust Vision Transformer |
| PRIME with JSD (ResNet-50) | 53.7 | PRIME: A few primitives can boost robustness to common corruptions |
| DeepAugment+AugMix (ResNet-50) | 53.2 | The Many Faces of Robustness: A Critical Analysis of Out-of-Distribution Generalization |
| RVT-S* | 52.3 | Towards Robust Vision Transformer |
| Sequencer2D-L | 51.9 | Sequencer: Deep LSTM for Image Classification |
| RVT-B* | 51.3 | Towards Robust Vision Transformer |
| ConvFormer-B36 | 48.9 | MetaFormer Baselines for Vision |
| ConvFormer-B36 (384) | 47.8 | MetaFormer Baselines for Vision |
| CAFormer-B36 | 46.1 | MetaFormer Baselines for Vision |
| Pyramid Adversarial Training Improves ViT | 46.08 | Pyramid Adversarial Training Improves ViT Performance |
| CAFormer-B36 (384) | 45 | MetaFormer Baselines for Vision |
| DiscreteViT | 44.74 | Discrete Representations Strengthen Vision Transformer Robustness |
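
As a rough illustration of how such numbers are produced, the sketch below evaluates a pretrained ResNet-50 on a local copy of ImageNet-R with torchvision. The dataset path and the `wnid_to_in1k_idx` mapping file are assumptions: ImageNet-R covers a 200-class subset of ImageNet-1k, so the model's 1000-way logits must be restricted to those classes before taking the argmax.

```python
import json
import torch
from torchvision import datasets, models, transforms

# Standard ImageNet preprocessing for a 224x224 ResNet-50.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical local copy of ImageNet-R, one folder per WordNet ID.
dataset = datasets.ImageFolder("path/to/imagenet-r", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=64, num_workers=4)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1).eval()

# Assumed mapping from each WordNet ID to its ImageNet-1k label index
# (hypothetical file); used to pick the 200 ImageNet-R classes out of the
# model's 1000 output logits, in the same order as dataset.classes.
wnid_to_in1k_idx = json.load(open("wnid_to_in1k_idx.json"))
subset_idx = torch.tensor([wnid_to_in1k_idx[w] for w in dataset.classes])

wrong, total = 0, 0
with torch.no_grad():
    for images, labels in loader:
        logits = model(images)[:, subset_idx]   # keep only ImageNet-R classes
        wrong += (logits.argmax(dim=1) != labels).sum().item()
        total += labels.numel()

print(f"Top-1 Error Rate: {100.0 * wrong / total:.1f}%")
```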