Learning Individual Styles of Conversational Gesture

Shiry Ginosar; Amir Bar; Gefen Kohavi; Caroline Chan; Andrew Owens; Jitendra Malik

Abstract
Human speech is often accompanied by hand and arm gestures. Given audio speech input, we generate plausible gestures to go along with the sound. Specifically, we perform cross-modal translation from "in-the-wild" monologue speech of a single speaker to their hand and arm motion. We train on unlabeled videos for which we only have noisy pseudo ground truth from an automatic pose detection system. Our proposed model significantly outperforms baseline methods in a quantitative comparison. To support research toward obtaining a computational understanding of the relationship between gesture and speech, we release a large video dataset of person-specific gestures. The project website with video, code, and data can be found at http://people.eecs.berkeley.edu/~shiry/speech2gesture.
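The training setup described in the abstract lends itself to a compact sketch. The PyTorch-style code below is a minimal illustration and not the paper's actual architecture: it regresses a sequence of 2D keypoints from log-mel audio features, supervised with an L1 loss against noisy pseudo ground-truth poses produced by an off-the-shelf detector. All layer shapes and names (`AudioToPose`, `N_MELS`, `N_KEYPOINTS`) are assumptions made for illustration.

```python
# Minimal sketch of audio-to-gesture regression (illustrative only; not the
# paper's exact model). Pseudo ground-truth 2D keypoints are assumed to have
# been extracted from video by an automatic pose detector.
import torch
import torch.nn as nn

N_MELS = 64        # log-mel bands per audio frame (assumed)
N_KEYPOINTS = 49   # upper-body + hand keypoints per pose frame (assumed)

class AudioToPose(nn.Module):
    """Translate log-mel audio features (B, T, N_MELS) to poses (B, T, N_KEYPOINTS*2)."""
    def __init__(self, hidden=256):
        super().__init__()
        # 1D temporal convolutions over the audio frame axis.
        self.encoder = nn.Sequential(
            nn.Conv1d(N_MELS, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        self.head = nn.Conv1d(hidden, N_KEYPOINTS * 2, kernel_size=1)

    def forward(self, mel):                     # mel: (B, T, N_MELS)
        h = self.encoder(mel.transpose(1, 2))   # (B, hidden, T)
        return self.head(h).transpose(1, 2)     # (B, T, N_KEYPOINTS*2)

model = AudioToPose()
mel = torch.randn(8, 64, N_MELS)                  # batch of audio feature sequences
pseudo_gt = torch.randn(8, 64, N_KEYPOINTS * 2)   # noisy detector keypoints
loss = nn.functional.l1_loss(model(mel), pseudo_gt)
loss.backward()
```

An L1 regression loss is a natural fit for this setup, since it is less sensitive than L2 to the outlier keypoints that an automatic pose detector inevitably produces.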
Benchmarks
| Benchmark | Model | Metric |
|---|---|---|
| Gesture Generation on BEAT | Speech2Gestures | FID: 256.7 |
| Gesture Generation on BEAT2 | S2G | FGD: 2.815 |
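Both leaderboard metrics are Fréchet distances: Gaussian statistics of feature embeddings of real and of generated gestures are fit and then compared in closed form (lower is better). The sketch below computes that distance from two embedding matrices; the feature extractor itself (for FGD, typically a pretrained gesture autoencoder) is benchmark-specific and assumed here.

```python
# Sketch of the Fréchet distance underlying FID/FGD: fit a Gaussian to each
# set of feature embeddings, then compare the two Gaussians in closed form.
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(real_feats: np.ndarray, gen_feats: np.ndarray) -> float:
    """real_feats, gen_feats: (num_samples, feature_dim) embedding matrices."""
    mu_r, mu_g = real_feats.mean(axis=0), gen_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_g = np.cov(gen_feats, rowvar=False)
    # sqrtm may return tiny imaginary parts from numerical error; drop them.
    covmean = sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))
```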