MAT: Mask-Aware Transformer for Large Hole Image Inpainting

Wenbo Li, Zhe Lin, Kun Zhou, Lu Qi, Yi Wang, Jiaya Jia

Abstract

Recent studies have shown the importance of modeling long-range interactions in the inpainting problem. To achieve this goal, existing approaches exploit either standalone attention techniques or transformers, but usually at low resolution due to the computational cost. In this paper, we present a novel transformer-based model for large hole inpainting, which unifies the merits of transformers and convolutions to efficiently process high-resolution images. We carefully design each component of our framework to ensure the high fidelity and diversity of recovered images. Specifically, we customize an inpainting-oriented transformer block, where the attention module aggregates non-local information only from the subset of valid tokens indicated by a dynamic mask. Extensive experiments demonstrate the state-of-the-art performance of the new model on multiple benchmark datasets. Code is released at https://github.com/fenglinglwb/MAT.
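To make the mask-aware attention idea concrete, the snippet below is a minimal PyTorch sketch, not the official MAT block (see the linked repository for that): keys and values at positions a binary mask marks as holes are excluded from the softmax, so each query aggregates non-local information only from valid tokens. The class name MaskAwareAttention and the mask convention (1 = valid, 0 = hole) are assumptions made for this illustration.

```python
import torch
import torch.nn as nn

class MaskAwareAttention(nn.Module):
    """Self-attention restricted to valid tokens (illustrative sketch, not the official MAT block)."""

    def __init__(self, dim, num_heads=4):
        super().__init__()
        assert dim % num_heads == 0, "dim must be divisible by num_heads"
        self.num_heads = num_heads
        self.scale = (dim // num_heads) ** -0.5
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x, mask):
        # x:    (B, N, C) token features
        # mask: (B, N), 1.0 for valid tokens, 0.0 for hole tokens (assumed convention)
        B, N, C = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)  # each: (B, heads, N, head_dim)

        attn = (q @ k.transpose(-2, -1)) * self.scale  # (B, heads, N, N)
        # Exclude hole tokens from the keys before the softmax, so invalid
        # positions contribute nothing to the aggregation.
        key_mask = mask[:, None, None, :]  # broadcast to (B, 1, 1, N)
        attn = attn.masked_fill(key_mask == 0, float("-inf"))
        attn = attn.softmax(dim=-1)
        # If every key in a row is masked, the softmax yields NaNs; zero them out.
        attn = torch.nan_to_num(attn)

        out = (attn @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)

# Toy usage: 16 tokens of width 64, with the last half marked as holes.
x = torch.randn(2, 16, 64)
mask = torch.ones(2, 16)
mask[:, 8:] = 0
y = MaskAwareAttention(dim=64)(x, mask)  # (2, 16, 64)
```

Note that the sketch uses a fixed mask; in the paper's description the mask is dynamic, updated as regions are progressively filled across blocks, which this example omits.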

Code Repositories

fenglinglwb/mat (official, PyTorch)

Benchmarks

Benchmark | Methodology | Metrics
image-inpainting-on-celeba-hq | MAT | FID: 4.86, P-IDS: 13.83, U-IDS: 25.33
image-inpainting-on-places2-1 | MAT | FID: 1.96, P-IDS: 23.42, U-IDS: 38.34
