Chao Feng; Ziyang Chen; Andrew Owens

Abstract
Manipulated videos often contain subtle inconsistencies between their visual and audio signals. We propose a video forensics method, based on anomaly detection, that can identify these inconsistencies, and that can be trained solely using real, unlabeled data. We train an autoregressive model to generate sequences of audio-visual features, using feature sets that capture the temporal synchronization between video frames and sound. At test time, we then flag videos that the model assigns low probability. Despite being trained entirely on real videos, our model obtains strong performance on the task of detecting manipulated speech videos. Project site: https://cfeng16.github.io/audio-visual-forensics
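As a rough illustration of the anomaly-detection idea described above, the sketch below scores a sequence of quantized audio-visual synchronization features with a small autoregressive Transformer and uses the average negative log-likelihood as an anomaly score, flagging videos the model assigns low probability. The model class, vocabulary size, and placeholder feature tokens are hypothetical assumptions for illustration, not the authors' released implementation.

```python
# Minimal sketch of autoregressive anomaly scoring over audio-visual features.
# ARModel, the vocabulary size, and the random "feature tokens" are placeholders.
import torch
import torch.nn as nn


class ARModel(nn.Module):
    """Toy autoregressive model over discrete audio-visual feature tokens."""

    def __init__(self, vocab_size=512, dim=256, n_layers=4, n_heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(dim, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, tokens):
        # tokens: (batch, time) integer ids of audio-visual sync features
        T = tokens.size(1)
        # Causal mask so each step only attends to past features.
        mask = torch.triu(
            torch.full((T, T), float("-inf"), device=tokens.device), diagonal=1
        )
        x = self.embed(tokens)
        x = self.encoder(x, mask=mask)
        return self.head(x)  # (batch, time, vocab_size) next-token logits


@torch.no_grad()
def anomaly_score(model, tokens):
    """Mean negative log-likelihood of the sequence; higher = more anomalous."""
    logits = model(tokens[:, :-1])  # predict token t+1 from tokens <= t
    nll = nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        tokens[:, 1:].reshape(-1),
        reduction="mean",
    )
    return nll.item()


# Usage: score a (placeholder) sequence of quantized sync features and
# flag it as manipulated if the score exceeds a chosen threshold.
model = ARModel()
tokens = torch.randint(0, 512, (1, 128))
score = anomaly_score(model, tokens)
print(f"anomaly score (mean NLL): {score:.3f}")
```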
Benchmarks
| Benchmark | Methodology | AP | ROC AUC |
|---|---|---|---|
| deepfake-detection-on-fakeavceleb-1 | AVAD | 94.2 | 94.5 |