Triantafyllos Afouras; Joon Son Chung; Andrew Senior; Oriol Vinyals; Andrew Zisserman

Abstract
The goal of this work is to recognise phrases and sentences being spoken by a talking face, with or without the audio. Unlike previous works that have focussed on recognising a limited number of words or phrases, we tackle lip reading as an open-world problem: unconstrained natural language sentences, and in-the-wild videos. Our key contributions are: (1) we compare two models for lip reading, one using a CTC loss, and the other using a sequence-to-sequence loss. Both models are built on top of the transformer self-attention architecture; (2) we investigate to what extent lip reading is complementary to audio speech recognition, especially when the audio signal is noisy; (3) we introduce and publicly release a new dataset for audio-visual speech recognition, LRS2-BBC, consisting of thousands of natural sentences from British television. The models that we train surpass the performance of all previous work on a lip reading benchmark dataset by a significant margin.
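The abstract contrasts a CTC loss with a sequence-to-sequence loss, both built on a transformer encoder. The sketch below is a minimal PyTorch illustration of that contrast, not the authors' code: all dimensions, layer counts, sequence lengths, and the character vocabulary are assumed for the example. TM-CTC scores per-frame posteriors (plus a blank symbol) with CTC, while TM-seq2seq decodes characters with a transformer decoder trained with cross-entropy.

```python
# Illustrative sketch of the two losses compared in the paper (assumed shapes).
import torch
import torch.nn as nn

vocab_size, T, B = 40, 75, 2           # characters, input frames, batch (assumed)
feats = torch.randn(T, B, 512)         # visual features from the CNN front-end

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=512, nhead=8), num_layers=6)
enc_out = encoder(feats)               # (T, B, 512)

# --- CTC variant (TM-CTC): per-frame posteriors aligned by CTC ---
ctc_head = nn.Linear(512, vocab_size + 1)          # +1 output for the CTC blank
log_probs = ctc_head(enc_out).log_softmax(-1)      # (T, B, V+1)
targets = torch.randint(1, vocab_size, (B, 20))    # dummy character targets
ctc_loss = nn.CTCLoss(blank=vocab_size)(
    log_probs, targets,
    torch.full((B,), T, dtype=torch.long),         # input lengths
    torch.full((B,), 20, dtype=torch.long))        # target lengths

# --- seq2seq variant (TM-seq2seq): autoregressive decoder + cross-entropy ---
# (teacher forcing with a shifted <sos> input is omitted for brevity)
decoder = nn.TransformerDecoder(
    nn.TransformerDecoderLayer(d_model=512, nhead=8), num_layers=6)
embed = nn.Embedding(vocab_size, 512)
tgt = embed(targets).transpose(0, 1)               # (20, B, 512)
tgt_mask = torch.triu(torch.full((20, 20), float("-inf")), diagonal=1)
dec_out = decoder(tgt, enc_out, tgt_mask=tgt_mask)
s2s_logits = nn.Linear(512, vocab_size)(dec_out)   # (20, B, V)
s2s_loss = nn.CrossEntropyLoss()(
    s2s_logits.reshape(-1, vocab_size),
    targets.transpose(0, 1).reshape(-1))

print(float(ctc_loss), float(s2s_loss))
```

The structural difference is where alignment lives: CTC marginalises over monotonic frame-to-character alignments at the output, whereas the seq2seq decoder learns alignment implicitly through attention over the encoder output.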
Benchmarks
| Benchmark | Methodology | Test WER (%) |
|---|---|---|
| Audio-Visual Speech Recognition on LRS2 | TM-CTC | 8.2 |
| Audio-Visual Speech Recognition on LRS2 | TM-seq2seq | 8.5 |
| Audio-Visual Speech Recognition on LRS3-TED | TM-seq2seq | 7.2 |
| Automatic Speech Recognition on LRS2 | TM-CTC | 10.1 |
| Automatic Speech Recognition on LRS2 | TM-seq2seq | 9.7 |
| Lipreading on LRS2 | TM-seq2seq + extLM | 48.3 |
| Lipreading on LRS2 | TM-CTC + extLM | 54.7 |
| Lipreading on LRS3-TED | TM-seq2seq | 58.9 |