Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions

Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao

Abstract

Although convolutional neural networks (CNNs) have achieved great success as backbones in computer vision, this work investigates a simple, convolution-free backbone network useful for many dense prediction tasks. Unlike the recently proposed Transformer model (e.g., ViT), which is specially designed for image classification, we propose the Pyramid Vision Transformer (PVT), which overcomes the difficulties of porting Transformers to various dense prediction tasks. PVT has several merits compared to prior art. (1) Unlike ViT, which typically has low-resolution outputs and high computational and memory costs, PVT can not only be trained on dense partitions of an image to achieve high output resolution, which is important for dense prediction, but also uses a progressive shrinking pyramid to reduce the computation on large feature maps. (2) PVT inherits the advantages of both CNNs and Transformers, making it a unified backbone for various vision tasks without convolutions, simply by replacing CNN backbones. (3) We validate PVT through extensive experiments, showing that it boosts the performance of many downstream tasks, e.g., object detection and semantic and instance segmentation. For example, with a comparable number of parameters, RetinaNet+PVT achieves 40.4 AP on the COCO dataset, surpassing RetinaNet+ResNet50 (36.3 AP) by 4.1 absolute AP. We hope PVT will serve as an alternative and useful backbone for pixel-level prediction and facilitate future research. Code is available at https://github.com/whai362/PVT.
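
To make the "progressive shrinking pyramid" concrete, here is a minimal PyTorch sketch (not the official implementation): each stage flattens non-overlapping patches and projects them with a linear layer, so spatial resolution shrinks while channel width grows, yielding a CNN-style feature pyramid without convolutions. The stage settings below mirror the paper's four-stage design, but the class and overall structure are simplified for illustration; the real model also inserts Transformer encoder blocks (with spatial-reduction attention) and position embeddings at each stage.

```python
import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    """Convolution-free patch embedding: unfold non-overlapping
    patches, then project each flattened patch with a linear layer."""
    def __init__(self, patch_size, in_dim, embed_dim):
        super().__init__()
        self.patch_size = patch_size
        self.proj = nn.Linear(in_dim * patch_size * patch_size, embed_dim)

    def forward(self, x):                             # x: (B, C, H, W)
        p = self.patch_size
        patches = x.unfold(2, p, p).unfold(3, p, p)   # (B, C, H/p, W/p, p, p)
        B, C, Hp, Wp, _, _ = patches.shape
        tokens = patches.permute(0, 2, 3, 1, 4, 5).reshape(B, Hp * Wp, C * p * p)
        return self.proj(tokens), (Hp, Wp)            # token sequence + new grid size

# Four stages: resolution drops 4x, then 2x per stage, while width grows,
# producing the multi-scale feature maps that dense prediction heads need.
stages = [(4, 3, 64), (2, 64, 128), (2, 128, 320), (2, 320, 512)]
x = torch.randn(1, 3, 224, 224)
for p, c_in, c_out in stages:
    tokens, (H, W) = PatchEmbed(p, c_in, c_out)(x)
    # ... Transformer encoder blocks would process `tokens` here ...
    x = tokens.transpose(1, 2).reshape(1, c_out, H, W)  # back to a feature map
    print(x.shape)  # 56x56 -> 28x28 -> 14x14 -> 7x7
```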

Code Repositories

- xiaohu2015/pvt_detectron2 (PyTorch)
- open-mmlab/mmpose (PyTorch)
- microsoft/vision-longformer (PyTorch)
- DarshanDeshpande/jax-models (JAX)
- hustvl/sparseinst (PyTorch)
- whai362/PVT (official, PyTorch)
- wangermeng2021/PVT-tensorflow2 (TensorFlow 2)
- SforAiDl/vformer (PyTorch)

Benchmarks

| Benchmark | Methodology | Metrics |
| --- | --- | --- |
| object-detection-on-coco-minival | PVT-Large (RetinaNet 3x, MS) | box AP: 43.4; AP50: 63.6; AP75: 46.1; APS: 26.1; APM: 46.0; APL: 59.5 |
| object-detection-on-coco-minival | PVT-Large (RetinaNet 1x) | box AP: 42.6; AP50: 63.7; AP75: 45.4; APS: 25.8; APM: 46.0; APL: 58.4 |
| semantic-segmentation-on-densepass | PVT (Tiny, FPN) | mIoU: 31.20% |
| semantic-segmentation-on-synpass | PVT | mIoU: 32.68% |
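
The RetinaNet rows above come from swapping the ResNet backbone for PVT, as the abstract claims. Below is a hypothetical mmdetection-style config fragment sketching that swap; the backbone type name and checkpoint path are placeholders rather than the official config, while the FPN input channels follow the paper's four stage widths.

```python
# Hypothetical mmdetection-style config fragment (not the official file).
# Relative to a ResNet-50 RetinaNet baseline, only the backbone block and
# the FPN input channels change, illustrating the drop-in replacement.
model = dict(
    type='RetinaNet',
    backbone=dict(
        type='PVTLarge',                             # placeholder registry name
        init_cfg=dict(checkpoint='pvt_large.pth')),  # placeholder weights
    neck=dict(
        type='FPN',
        in_channels=[64, 128, 320, 512],             # PVT's four stage widths
        out_channels=256,
        num_outs=5),
    # bbox_head, train_cfg, and test_cfg stay identical to the ResNet baseline
)
```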
