Towards An End-to-End Framework for Flow-Guided Video Inpainting
Zhen Li; Cheng-Ze Lu; Jianhua Qin; Chun-Le Guo; Ming-Ming Cheng

Abstract
Optical flow, which captures motion information across frames, is exploited in recent video inpainting methods by propagating pixels along its trajectories. However, the hand-crafted flow-based processes in these methods are applied separately to form the whole inpainting pipeline, so they are less efficient and rely heavily on the intermediate results from earlier stages. In this paper, we propose an End-to-End framework for Flow-Guided Video Inpainting (E$^2$FGVI) built from three elaborately designed trainable modules: flow completion, feature propagation, and content hallucination. The three modules correspond to the three stages of previous flow-based methods but can be jointly optimized, leading to a more efficient and effective inpainting process. Experimental results demonstrate that the proposed method outperforms state-of-the-art methods both qualitatively and quantitatively and shows promising efficiency. The code is available at https://github.com/MCG-NKU/E2FGVI.
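The end-to-end design described above can be sketched as three stages composed into a single pipeline. The module names below follow the abstract, but every function body, shape, and parameter here is an illustrative placeholder, not the paper's actual implementation:

```python
import numpy as np

def complete_flow(frames, masks):
    # Stand-in for the flow-completion module: estimate 2-channel optical
    # flow between adjacent frames and fill it inside the masked regions.
    # A zero flow field is returned purely as a placeholder.
    t, h, w, _ = frames.shape
    return np.zeros((t - 1, h, w, 2), dtype=np.float32)

def propagate_features(frames, masks, flows):
    # Stand-in for flow-guided feature propagation: in the real model,
    # features are warped from neighboring frames along the completed
    # flow trajectories. Here we just fill masked pixels with the mean.
    feats = frames.copy()
    feats[masks.astype(bool)] = feats.mean()
    return feats

def hallucinate_content(feats):
    # Stand-in for the content-hallucination module, which synthesizes
    # content that flow-based propagation alone cannot recover.
    return np.clip(feats, 0.0, 1.0)

def e2fgvi_pipeline(frames, masks):
    # The key point of the paper: the three stages form one jointly
    # trainable pipeline rather than separate hand-crafted processes.
    flows = complete_flow(frames, masks)
    feats = propagate_features(frames, masks, flows)
    return hallucinate_content(feats)
```

In this sketch the output of each stage feeds directly into the next, which is what allows gradients to flow through all three modules during joint optimization; in prior flow-based methods these stages were isolated, non-trainable steps.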
Benchmarks
| Benchmark | Method | Metrics |
|---|---|---|
| Seeing Beyond the Visible on KITTI360-EX | E2FGVI | Average PSNR: 19.45 |
| Video Inpainting on DAVIS | E2FGVI | E_warp: 0.1315; PSNR: 33.01; SSIM: 0.9721; VFID: 0.116 |
| Video Inpainting on HQVI (240p) | E2FGVI | LPIPS: 0.0401; PSNR: 30.63; SSIM: 0.9427; VFID: 0.1885 |
| Video Inpainting on YouTube-VOS | E2FGVI | E_warp: 0.0864; PSNR: 33.71; SSIM: 0.9700; VFID: 0.046 |