Image Classification On Omnibenchmark

Evaluation Metrics

Average Top-1 Accuracy
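
For reference, the sketch below shows one common way to compute an average top-1 accuracy: score each sub-dataset separately, then take the unweighted mean. This is a minimal illustration assuming per-dataset logits and labels are available as NumPy arrays; the function and variable names are hypothetical, and the exact averaging scheme used by Omnibenchmark (e.g. over its semantic realms) may differ.

```python
import numpy as np

def top1_accuracy(logits: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of samples whose highest-scoring class matches the label."""
    preds = logits.argmax(axis=1)          # predicted class per sample
    return float((preds == labels).mean())

def average_top1_accuracy(per_dataset: dict) -> float:
    """Unweighted macro average of top-1 accuracy over sub-datasets.

    per_dataset maps a dataset name to a (logits, labels) pair.
    """
    accs = [top1_accuracy(logits, labels) for logits, labels in per_dataset.values()]
    return float(np.mean(accs))
```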

Evaluation Results

Results of each model on this benchmark.

| Model | Average Top-1 Accuracy | Paper Title |
| --- | --- | --- |
| NOAH-ViTB/16 | 47.6 | Neural Prompt Search |
| SwinTransformer | 46.4 | Swin Transformer: Hierarchical Vision Transformer using Shifted Windows |
| Bamboo-R50 | 45.4 | Bamboo: Building Mega-Scale Vision Dataset Continually with Human-Machine Synergy |
| Adapter-ViTB/16 | 44.5 | Parameter-Efficient Transfer Learning for NLP |
| CLIP-RN50 | 42.1 | Learning Transferable Visual Models From Natural Language Supervision |
| IG-1B | 40.4 | Billion-scale semi-supervised learning for image classification |
| BiT-M | 40.4 | Big Transfer (BiT): General Visual Representation Learning |
| DINO | 38.9 | Emerging Properties in Self-Supervised Vision Transformers |
| SwAV | 38.3 | Unsupervised Learning of Visual Features by Contrasting Cluster Assignments |
| ResNet-101 | 37.4 | Deep Residual Learning for Image Recognition |
| MEAL-V2 | 36.6 | MEAL V2: Boosting Vanilla ResNet-50 to 80%+ Top-1 Accuracy on ImageNet without Tricks |
| MoPro-V2 | 36.1 | MoPro: Webly Supervised Learning with Momentum Prototypes |
| EfficientNetB4 | 35.8 | EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks |
| MoCoV2 | 34.8 | Momentum Contrast for Unsupervised Visual Representation Learning |
| ResNet-50 | 34.3 | Deep Residual Learning for Image Recognition |
| InceptionV4 | 32.3 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning |
| MLP-Mixer | 32.2 | MLP-Mixer: An all-MLP Architecture for Vision |
| Manifold | 31.6 | Manifold Mixup: Better Representations by Interpolating Hidden States |
| CutMix | 31.1 | CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features |
| ReLabel | 30.8 | Re-labeling ImageNet: from Single to Multi-Labels, from Global to Localized Labels |