Pre-Trained Image Processing Transformer

Abstract

As the computing power of modern hardware increases rapidly, pre-trained deep learning models (e.g., BERT, GPT-3) learned on large-scale datasets have shown their effectiveness over conventional methods. This progress is mainly attributed to the representation ability of the transformer and its variant architectures. In this paper, we study low-level computer vision tasks (e.g., denoising, super-resolution, and deraining) and develop a new pre-trained model, namely, the image processing transformer (IPT). To maximally excavate the capability of the transformer, we propose to utilize the well-known ImageNet benchmark for generating a large number of corrupted image pairs. The IPT model is trained on these images with multiple heads and multiple tails. In addition, contrastive learning is introduced to adapt well to different image processing tasks. The pre-trained model can therefore be efficiently employed on a desired task after fine-tuning. With only one pre-trained model, IPT outperforms the current state-of-the-art methods on various low-level benchmarks. Code is available at https://github.com/huawei-noah/Pretrained-IPT and https://gitee.com/mindspore/mindspore/tree/master/model_zoo/research/cv/IPT
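
The multi-head, multi-tail design is the core architectural idea: each image processing task gets its own lightweight convolutional head and tail, while a single transformer body is shared across all tasks. The following PyTorch sketch illustrates that layout together with corrupted-pair generation in the spirit of the ImageNet pre-training setup. It is a minimal illustration under assumed module names, dimensions, noise model, and task list, not the authors' implementation:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def make_denoise_pair(clean, sigma=0.1):
        # Illustrative corrupted-pair generation: synthesize the degraded
        # input from a clean crop, as the paper does with ImageNet images.
        # (sigma and the additive Gaussian noise model are assumptions.)
        return clean + sigma * torch.randn_like(clean), clean

    class IPTSketch(nn.Module):
        # Minimal sketch of the multi-head / multi-tail layout: one small
        # convolutional head and tail per task, one shared transformer body.
        # Names, sizes, and the task list are illustrative, not the paper's
        # exact configuration.
        def __init__(self, tasks=("denoise", "derain", "sr_x2"),
                     dim=64, patch=4, depth=4, nhead=8):
            super().__init__()
            self.patch = patch
            # Task-specific heads: RGB input -> shared feature space.
            self.heads = nn.ModuleDict({
                t: nn.Sequential(nn.Conv2d(3, dim, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(dim, dim, 3, padding=1))
                for t in tasks})
            # Shared body: a transformer over flattened patch tokens.
            layer = nn.TransformerEncoderLayer(d_model=dim * patch * patch,
                                               nhead=nhead, batch_first=True)
            self.body = nn.TransformerEncoder(layer, num_layers=depth)
            # Task-specific tails: features -> RGB. (A real SR tail would
            # also upsample, e.g. with PixelShuffle; omitted for brevity.)
            self.tails = nn.ModuleDict({
                t: nn.Conv2d(dim, 3, 3, padding=1) for t in tasks})

        def forward(self, x, task):
            f = self.heads[task](x)                      # (B, dim, H, W)
            B, C, H, W = f.shape
            p = self.patch
            # Unfold the feature map into a (B, num_patches, dim*p*p) sequence.
            tokens = F.unfold(f, p, stride=p).transpose(1, 2)
            tokens = self.body(tokens)
            # Fold the tokens back into a feature map of the original size.
            f = F.fold(tokens.transpose(1, 2), (H, W), p, stride=p)
            return self.tails[task](f)

    model = IPTSketch()
    noisy, clean = make_denoise_pair(torch.rand(1, 3, 48, 48))
    restored = model(noisy, task="denoise")  # route through the matching pair
    loss = F.l1_loss(restored, clean)        # per-task reconstruction loss
    print(restored.shape)                    # torch.Size([1, 3, 48, 48])

During pre-training, each sample is routed through the head and tail matching its degradation type while all tasks update the shared body; per the abstract, a contrastive objective is additionally applied so the shared representation transfers across image processing tasks, and fine-tuning then adapts the single pre-trained model to the desired task.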
