Image Classification On Flowers 102

Evaluation Metric

Accuracy
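The leaderboard ranks models by top-1 accuracy: the percentage of test images whose predicted class matches the ground-truth label. A minimal sketch of that computation (the function name `top1_accuracy` and the toy inputs are illustrative, not part of any benchmark toolkit):

```python
def top1_accuracy(predictions, labels):
    """Percentage of predictions that exactly match the true labels."""
    if len(predictions) != len(labels):
        raise ValueError("predictions and labels must have equal length")
    correct = sum(p == y for p, y in zip(predictions, labels))
    return 100.0 * correct / len(labels)

# Toy example: 3 of 4 predictions correct -> 75.0
print(top1_accuracy([0, 5, 12, 7], [0, 5, 3, 7]))
```

On Flowers-102 the labels range over the dataset's 102 flower categories, and the scores below are this quantity computed over the test split.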

Evaluation Results

Performance of each model on this benchmark

| Model | Accuracy (%) | Paper Title |
| --- | --- | --- |
| CCT-14/7x2 | 99.76 | Escaping the Big Data Paradigm with Compact Transformers |
| ViT-L/16 (Background) | 99.75 | Reduction of Class Activation Uncertainty with Background Information |
| CvT-W24 | 99.72 | CvT: Introducing Convolutions to Vision Transformers |
| Bamboo (ViT-B/16) | 99.7 | Bamboo: Building Mega-Scale Vision Dataset Continually with Human-Machine Synergy |
| Model 36 | 99.68 | An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale |
| EffNet-L2 (SAM) | 99.65 | Sharpness-Aware Minimization for Efficiently Improving Generalization |
| ALIGN | 99.65 | Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision |
| BiT-L (ResNet) | 99.63 | Big Transfer (BiT): General Visual Representation Learning |
| ConvMLP-S | 99.5 | ConvMLP: Hierarchical Convolutional MLPs for Vision |
| ConvMLP-L | 99.5 | ConvMLP: Hierarchical Convolutional MLPs for Vision |
| ResNet-152x4-AGC (ImageNet-21K) | 99.49 | Effect of Pre-Training Scale on Intra- and Inter-Domain Full and Few-Shot Transfer Learning for Natural and Medical X-Ray Chest Images |
| Wide-ResNet-101 (Spinal FC) | 99.30 | SpinalNet: Deep Neural Network with Gradual Input |
| BiT-M (ResNet) | 99.30 | Big Transfer (BiT): General Visual Representation Learning |
| CaiT-M-36 U 224 | 99.1 | - |
| Grafit (RegNet-8GF) | 99.1 | Grafit: Learning fine-grained image representations with coarse labels |
| TResNet-L | 99.1 | TResNet: High Performance GPU-Dedicated Architecture |
| DAT | 98.9 | Domain Adaptive Transfer Learning on Visual Attention Aware Data Augmentation for Fine-grained Visual Categorization |
| GFNet-H-B | 98.8 | Global Filter Networks for Image Classification |
| EfficientNet-B7 | 98.8 | EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks |
| DeiT-B | 98.8 | Training data-efficient image transformers & distillation through attention |