ExpPoint-MAE: Better interpretability and performance for self-supervised point cloud transformers

Ioannis Romanelis Vlassis Fotis Konstantinos Moustakas Adrian Munteanu

Abstract

In this paper we delve into the properties of transformers, attained through self-supervision, in the point cloud domain. Specifically, we evaluate the effectiveness of Masked Autoencoding as a pretraining scheme and explore Momentum Contrast as an alternative. In our study we investigate the impact of data quantity on the learned features and uncover similarities in the transformer's behavior across domains. Through comprehensive visualizations, we observe that the transformer learns to attend to semantically meaningful regions, indicating that pretraining leads to a better understanding of the underlying geometry. Moreover, we examine the finetuning process and its effect on the learned representations. Based on these observations, we devise an unfreezing strategy that consistently outperforms our baseline without introducing any other modifications to the model or the training pipeline, achieving state-of-the-art results in the classification task among transformer models.
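
The abstract does not spell out the unfreezing schedule here. As a rough illustration only, a gradual block-unfreezing loop for a pretrained point cloud transformer in PyTorch might look like the sketch below; the model attributes (blocks, cls_head), the epoch threshold, and the helper names are assumptions for illustration, not the authors' exact recipe.

    import torch.nn as nn

    def set_requires_grad(module: nn.Module, flag: bool) -> None:
        # Freeze or unfreeze every parameter of a module.
        for p in module.parameters():
            p.requires_grad = flag

    def apply_unfreezing_schedule(model: nn.Module, epoch: int, unfreeze_every: int = 10) -> None:
        # Assumes `model.blocks` is an iterable of transformer encoder blocks
        # and `model.cls_head` is the classification head (always trainable).
        blocks = list(model.blocks)
        set_requires_grad(model, False)          # start from a fully frozen encoder
        set_requires_grad(model.cls_head, True)  # the task head is always trained

        # Unfreeze blocks from the deepest (last) toward the first as training progresses.
        n_unfrozen = min(len(blocks), 1 + epoch // unfreeze_every)
        for block in blocks[-n_unfrozen:]:
            set_requires_grad(block, True)

    # Usage inside a standard finetuning loop (the optimizer should use parameter
    # groups or be rebuilt so newly unfrozen weights receive updates):
    # for epoch in range(num_epochs):
    #     apply_unfreezing_schedule(model, epoch)
    #     train_one_epoch(model, loader, optimizer)
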

Code Repositories

vvrpanda/exppoint-mae (official, PyTorch)
Mentioned in GitHub

Benchmarks

Benchmark                                        Methodology    Metrics
3d-point-cloud-classification-on-modelnet40     ExpPoint-MAE   Overall Accuracy: 94.2
3d-point-cloud-classification-on-scanobjectnn   ExpPoint-MAE   OBJ-BG (OA): 90.88; OBJ-ONLY (OA): 90.02
