HyperAI超神经
Efficient Vision Transformers
Efficient ViTs on ImageNet-1K with DeiT-S
Evaluation metrics: GFLOPs, Top-1 Accuracy

Evaluation results — performance of each model on this benchmark:

| Model | GFLOPs | Top-1 Accuracy (%) | Paper |
|---|---|---|---|
| Base (DeiT-S) | 4.6 | 79.8 | Training data-efficient image transformers & distillation through attention |
| EViT (90%) | 4.0 | 79.8 | Not All Patches are What You Need: Expediting Vision Transformers via Token Reorganizations |
| DynamicViT (90%) | 4.0 | 79.8 | DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification |
| SPViT (3.9G) | 3.9 | 79.8 | SPViT: Enabling Faster Vision Transformers via Soft Token Pruning |
| LTMP (80%) | 3.8 | 79.8 | Learned Thresholds Token Merging and Pruning for Vision Transformers |
| A-ViT | 3.6 | 78.6 | A-ViT: Adaptive Tokens for Efficient Vision Transformer |
| EViT (80%) | 3.5 | 79.8 | Not All Patches are What You Need: Expediting Vision Transformers via Token Reorganizations |
| ToMe ($r=8$) | 3.4 | 79.7 | Token Merging: Your ViT But Faster |
| DynamicViT (80%) | 3.4 | 79.8 | DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification |
| SPViT | 3.3 | 78.3 | Pruning Self-attentions into Convolutional Layers in Single Path |
| IA-RED$^2$ | 3.2 | 79.1 | IA-RED$^2$: Interpretability-Aware Redundancy Reduction for Vision Transformers |
| S$^2$ViTE | 3.2 | 79.2 | Chasing Sparsity in Vision Transformers: An End-to-End Exploration |
| BAT (70%) | 3.0 | 79.6 | Beyond Attentive Tokens: Incorporating Token Importance and Diversity for Efficient Vision Transformers |
| AS-DeiT-S (65%) | 3.0 | 79.6 | Adaptive Sparse ViT: Towards Learnable Adaptive Token Pruning by Fully Exploiting Self-Attention |
| EViT (70%) | 3.0 | 79.5 | Not All Patches are What You Need: Expediting Vision Transformers via Token Reorganizations |
| LTMP (60%) | 3.0 | 79.6 | Learned Thresholds Token Merging and Pruning for Vision Transformers |
| Evo-ViT | 3.0 | 79.4 | Evo-ViT: Slow-Fast Token Evolution for Dynamic Vision Transformer |
| eTPS | 3.0 | 79.7 | Joint Token Pruning and Squeezing Towards More Aggressive Compression of Vision Transformers |
| dTPS | 3.0 | 80.1 | Joint Token Pruning and Squeezing Towards More Aggressive Compression of Vision Transformers |
| DynamicViT (70%) | 2.9 | 79.3 | DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification |
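Most entries above (EViT, DynamicViT, A-ViT, and similar) cut GFLOPs by discarding a fraction of patch tokens according to an importance score. As a generic illustration of what a keep rate like "EViT (70%)" means — not any specific paper's exact method — here is a minimal NumPy sketch of score-based token pruning, where the hypothetical `cls_attn` array stands in for the attention each patch token receives from the [CLS] token:

```python
import numpy as np

def prune_tokens(tokens, cls_attn, keep_rate):
    """Keep the top `keep_rate` fraction of patch tokens, ranked by an
    importance score (here: attention received from [CLS], illustrative).

    tokens:   (N, D) patch embeddings
    cls_attn: (N,) per-token importance scores
    """
    n_keep = max(1, int(round(len(tokens) * keep_rate)))
    keep_idx = np.argsort(cls_attn)[::-1][:n_keep]  # highest-scoring tokens
    return tokens[np.sort(keep_idx)]                # keep original spatial order

rng = np.random.default_rng(0)
tokens = rng.standard_normal((196, 384))  # DeiT-S: 196 patch tokens, dim 384
cls_attn = rng.random(196)                # stand-in importance scores
kept = prune_tokens(tokens, cls_attn, keep_rate=0.7)
print(kept.shape)  # (137, 384)
```

In the real methods the score is learned or derived from attention maps inside each block, and pruning is applied at several depths, which is why a 70% per-stage keep rate yields roughly a third fewer total GFLOPs rather than exactly 30%.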
(First 20 of 41 entries shown.)
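As a quick sanity check on the GFLOPs column, each model's relative compute can be read off against the 4.6 GFLOP DeiT-S baseline; a minimal sketch using figures from the table:

```python
# Relative cost vs. the DeiT-S baseline (4.6 GFLOPs), from the table above.
base_gflops = 4.6
models = {"EViT (70%)": 3.0, "DynamicViT (70%)": 2.9, "dTPS": 3.0}

ratios = {name: g / base_gflops for name, g in models.items()}
for name, r in ratios.items():
    print(f"{name}: {r:.0%} of baseline FLOPs ({1 - r:.0%} saved)")
```

Note that dTPS reaches 80.1% top-1 at 3.0 GFLOPs — above the 79.8% baseline at roughly 65% of its compute — which is why FLOPs and accuracy must be compared jointly on this leaderboard.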