
Abstract
The advent of large-scale vision foundation models, pre-trained on diverse natural images, has marked a paradigm shift in computer vision. However, how well the capabilities of frontier vision foundation models transfer to specialized domains such as medical imaging remains an open question. This report investigates whether DINOv3, a state-of-the-art self-supervised vision transformer (ViT) with strong capabilities in dense prediction tasks, can directly serve as a powerful, unified encoder for medical vision tasks without domain-specific pre-training. To answer this, we benchmark DINOv3 across common medical vision tasks, including 2D/3D classification and segmentation, on a wide range of medical imaging modalities. We systematically analyze its scalability by varying model sizes and input image resolutions. Our findings reveal that DINOv3 shows impressive performance and establishes a formidable new baseline. Remarkably, it can even outperform medical-specific foundation models such as BiomedCLIP and CT-Net on several tasks, despite being trained solely on natural images. However, we identify clear limitations: the model's features degrade in scenarios requiring deep domain specialization, such as whole-slide pathology images (WSIs), electron microscopy (EM), and positron emission tomography (PET). Furthermore, we observe that DINOv3 does not consistently follow scaling laws in the medical domain; performance does not reliably improve with larger models or finer feature resolutions, and scaling behavior varies across tasks. Ultimately, our work establishes DINOv3 as a strong baseline whose powerful visual features can serve as a robust prior for multiple complex medical tasks. This opens promising future directions, such as leveraging its features to enforce multi-view consistency in 3D reconstruction.
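To make the evaluation setting concrete, the sketch below illustrates the kind of frozen-encoder protocol the abstract describes: DINOv3 features are extracted without gradient updates and only a lightweight linear probe is trained for a medical classification task. The torch.hub repository path, entry-point name, and feature dimension are assumptions for illustration; consult the official DINOv3 release for the exact loading API, and the body of the report for the actual benchmark configurations.

```python
import torch
import torch.nn as nn

# Assumed hub entry point; the official facebookresearch/dinov3 release
# documents the exact model names and any required weight arguments.
backbone = torch.hub.load("facebookresearch/dinov3", "dinov3_vitb16")
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False  # frozen encoder: only the probe is trained

embed_dim = 768   # ViT-B feature width (assumption for this sketch)
num_classes = 2   # e.g. a binary diagnosis task
probe = nn.Linear(embed_dim, num_classes)

optimizer = torch.optim.AdamW(probe.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One linear-probe update on a batch of preprocessed medical images."""
    with torch.no_grad():
        feats = backbone(images)  # assumed to return (B, embed_dim) global features
    loss = criterion(probe(feats), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

For dense tasks such as segmentation, the same frozen backbone would instead expose patch-level features to a decoder head; the probe above is only the simplest instance of the frozen-feature evaluation idea.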