Few-Shot 3D Point Cloud Classification
Benchmark: Few Shot 3D Point Cloud Classification On 3

Evaluation Metrics: Overall Accuracy, Standard Deviation
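
These two metrics are typically reported as the mean overall accuracy and its standard deviation over several independently sampled few-shot runs; the number of runs and the choice of sample vs. population standard deviation vary by paper. A minimal sketch of the aggregation, assuming per-run accuracies are available as plain floats (the function name and run values are illustrative, not taken from the table below):

```python
from statistics import mean, stdev

def summarize_runs(per_run_accuracy):
    """Aggregate per-run overall accuracies (in %) into the two leaderboard
    metrics: mean Overall Accuracy and its Standard Deviation.

    Assumes one accuracy value per independently sampled few-shot run and
    uses the sample standard deviation; some papers use the population form.
    """
    return mean(per_run_accuracy), stdev(per_run_accuracy)

# Illustrative per-run accuracies -- not numbers taken from the leaderboard.
runs = [94.2, 95.8, 90.0, 96.1, 93.4, 97.0, 94.9, 95.5, 96.3, 95.8]
acc, std = summarize_runs(runs)
print(f"Overall Accuracy: {acc:.1f}, Standard Deviation: {std:.1f}")
```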

Evaluation Results
Performance of each model on this benchmark.

| Model Name | Overall Accuracy | Standard Deviation | Paper Title |
| --- | --- | --- | --- |
| Point-JEPA | 95.0 | 3.6 | Point-JEPA: A Joint Embedding Predictive Architecture for Self-Supervised Learning on Point Cloud |
| ReCon++ | 94.5 | 4.1 | ShapeLLM: Universal 3D Object Understanding for Embodied Interaction |
| 3D-JEPA | 94.3 | 3.6 | 3D-JEPA: A Joint Embedding Predictive Architecture for 3D Self-Supervised Representation Learning |
| PointGPT | 94.3 | 3.3 | PointGPT: Auto-regressively Generative Pre-training from Point Clouds |
| Point-FEMAE | 94.0 | - | Towards Compact 3D Representations via Point Feature Enhancement Masked Autoencoders |
| point2vec | 93.9 | 4.1 | Point2Vec for Self-Supervised Representation Learning on Point Clouds |
| PCP-MAE | 93.5 | 3.7 | PCP-MAE: Learning to Predict Centers for Point Masked Autoencoders |
| Point-RAE | 93.3 | 4.0 | Regress Before Construct: Regress Autoencoder for Point Cloud Self-supervised Learning |
| ReCon | 93.3 | 3.9 | Contrast with Reconstruct: Contrastive 3D Representation Learning Guided by Generative Pretraining |
| ACT | 93.3 | 4.0 | Autoencoders as Cross-Modal Teachers: Can Pretrained 2D Image Transformers Help 3D Representation Learning? |
| OTMae3D | 93.2 | 3.4 | - |
| IDPT | 92.8 | - | Instance-aware Dynamic Prompt Tuning for Pre-trained Point Cloud Models |
| Point-LGMask | 92.6 | 4.3 | Point-LGMask: Local and Global Contexts Embedding for Point Cloud Pre-training with Multi-Ratio Masking |
| Point-MAE | 92.6 | 4.1 | Masked Autoencoders for Point Cloud Self-supervised Learning |
| I2P-MAE | 92.6 | 5.0 | Learning 3D Representations from 2D Pre-trained Models via Image-to-Point Masked Autoencoders |
| Point-M2AE | 92.3 | 4.5 | Point-M2AE: Multi-scale Masked Autoencoders for Hierarchical Point Cloud Pre-training |
| MaskPoint | 91.4 | 4.0 | Masked Discrimination for Self-Supervised Learning on Point Clouds |
| Point-BERT | 91.0 | 5.4 | Point-BERT: Pre-training 3D Point Cloud Transformers with Masked Point Modeling |
| CrossMoCo | 88.7 | 3.9 | CrossMoCo: Multi-modal Momentum Contrastive Learning for Point Cloud |
| OcCo+PointNet | 83.9 | 1.8 | Unsupervised Point Cloud Pre-Training via Occlusion Completion |
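
Several entries tie on Overall Accuracy (three models at 93.3, three at 92.6), so a lower Standard Deviation is a natural secondary sort key when ranking. A small sketch of that ordering, using values copied from the rows above:

```python
# (model, overall accuracy, standard deviation) -- values copied from the table.
rows = [
    ("Point-RAE",    93.3, 4.0),
    ("ReCon",        93.3, 3.9),
    ("ACT",          93.3, 4.0),
    ("Point-LGMask", 92.6, 4.3),
    ("Point-MAE",    92.6, 4.1),
    ("I2P-MAE",      92.6, 5.0),
]

# Sort by accuracy (descending), then by standard deviation (ascending).
ranked = sorted(rows, key=lambda r: (-r[1], r[2]))
for model, acc, std in ranked:
    print(f"{model:<14} {acc:.1f} ± {std:.1f}")
```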