Pauline Luc, Aidan Clark, Sander Dieleman, Diego de Las Casas, Yotam Doron, Albin Cassirer, Karen Simonyan

Abstract
Recent breakthroughs in adversarial generative modeling have led to models capable of producing video samples of high quality, even on large and complex datasets of real-world video. In this work, we focus on the task of video prediction, where given a sequence of frames extracted from a video, the goal is to generate a plausible future sequence. We first improve the state of the art by performing a systematic empirical study of discriminator decompositions and proposing an architecture that yields faster convergence and higher performance than previous approaches. We then analyze recurrent units in the generator, and propose a novel recurrent unit which transforms its past hidden state according to predicted motion-like features, and refines it to handle dis-occlusions, scene changes and other complex behavior. We show that this recurrent unit consistently outperforms previous designs. Our final model leads to a leap in state-of-the-art performance, obtaining a test set Fréchet Video Distance of 25.7, down from 69.2, on the large-scale Kinetics-600 dataset.
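The proposed recurrent unit first transforms its past hidden state according to predicted motion-like features, then refines the result. A minimal toy sketch of this two-stage idea is given below, assuming simplified stand-ins: the motion prediction is reduced to a single integer shift, the warp is a `np.roll`, and the refinement is a GRU-style gated update. All weight names (`W_m`, `W_z`, `W_c`) and the function `tsru_step` are illustrative, not the paper's actual architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def tsru_step(h_prev, x, params):
    """One step of a toy transformation-based recurrent unit.

    1. Predict a motion-like quantity from the input and past state.
    2. Warp the past hidden state according to it (here: a crude roll).
    3. Refine the warped state with a gated update, to handle content
       that warping alone cannot produce (e.g. dis-occlusions).
    All names and the shift-based warp are illustrative simplifications.
    """
    W_m, W_z, W_c = params["W_m"], params["W_z"], params["W_c"]
    concat = np.concatenate([h_prev, x])

    # Motion-like feature: one scalar mapped to a small integer shift.
    shift = int(np.round(np.tanh(W_m @ concat)[0] * 2))
    h_warp = np.roll(h_prev, shift)  # stand-in for a learned warp

    # Gated refinement of the warped state.
    z = sigmoid(W_z @ np.concatenate([h_warp, x]))   # update gate
    c = np.tanh(W_c @ np.concatenate([h_warp, x]))   # candidate state
    return (1 - z) * h_warp + z * c

# Toy usage: unroll the cell over a few random "frames".
rng = np.random.default_rng(0)
H, X = 8, 4
params = {"W_m": rng.normal(size=(1, H + X)),
          "W_z": rng.normal(size=(H, H + X)),
          "W_c": rng.normal(size=(H, H + X))}
h = np.zeros(H)
for _ in range(3):
    h = tsru_step(h, rng.normal(size=X), params)
```

The key design point this sketch tries to convey is the split of responsibilities: the warp reuses existing state content (cheap for smooth motion), while the gated refinement injects new content where warping fails.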
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| video-generation-on-bair-robot-pushing | TrIVD-GAN-FP | Cond: 1; FVD: 103.3; Pred: 15; Train: 15 |
| video-prediction-on-kinetics-600-12-frames | TrIVD-GAN-FP | Cond: 5; FVD: 25.74±0.66; IS: 12.54±0.06; Pred: 11 |