Unified speech and gesture synthesis using flow matching

Shivam Mehta Ruibo Tu Simon Alexanderson Jonas Beskow Éva Székely Gustav Eje Henter

Abstract

As text-to-speech technologies achieve remarkable naturalness in read-aloud tasks, there is growing interest in multimodal synthesis of verbal and non-verbal communicative behaviour, such as spontaneous speech and the body gestures that accompany it. This paper presents a novel, unified architecture for jointly synthesising speech acoustics and skeleton-based 3D gesture motion from text, trained using optimal-transport conditional flow matching (OT-CFM). The proposed architecture is simpler than the previous state of the art, has a smaller memory footprint, and can capture the joint distribution of speech and gestures, generating both modalities together in a single process. The new training regime, meanwhile, enables better synthesis quality in far fewer steps (network evaluations) than before. Uni- and multimodal subjective tests demonstrate improved speech naturalness, gesture human-likeness, and cross-modal appropriateness compared to existing benchmarks. Please see https://shivammehta25.github.io/Match-TTSG/ for video examples and code.
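The OT-CFM objective mentioned in the abstract trains a network to regress a target vector field along a straight-line path from Gaussian noise to a data sample. The following is a minimal NumPy sketch of that conditional path and target, based on the standard OT-CFM formulation; the function name, batch shapes, and σ_min value are illustrative assumptions, not details from the paper:

```python
import numpy as np

def ot_cfm_pair(x0, x1, t, sigma_min=1e-4):
    """Conditional OT path and regression target for flow matching.

    x0: noise sample, x1: data sample (e.g. stacked speech + gesture
    features -- a hypothetical layout, not the paper's exact one).
    """
    # Straight-line interpolant from noise toward data at time t in [0, 1].
    xt = (1.0 - (1.0 - sigma_min) * t) * x0 + t * x1
    # Target vector field for this conditional path (constant in t);
    # the model v_theta(xt, t) is trained with an MSE loss against ut.
    ut = x1 - (1.0 - sigma_min) * x0
    return xt, ut

rng = np.random.default_rng(0)
x1 = rng.normal(size=(2, 4))  # a toy "data" batch
x0 = rng.normal(size=(2, 4))  # Gaussian noise batch
xt, ut = ot_cfm_pair(x0, x1, t=0.5)
```

Because the target field is constant along each conditional path, sampling at synthesis time can follow near-straight trajectories, which is why flow-matching models need far fewer network evaluations than score-based diffusion.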

Benchmarks

Benchmark                                      Methodology   Metrics
motion-synthesis-on-trinity-speech-gesture     Match-TTSG    Mean Opinion Score: 3.44
text-to-speech-synthesis-on-trinity-speech     Match-TTSG    MOS: 3.7
